All of Halfwit's Comments + Replies

Halfwit20

A lot of people got this from shuttle launches, and so reacted negatively to the (in my opinion good) arguments for focusing NASA's budget on robotic space exploration.

Halfwit30

Hmm, one way to maybe get around this would be to start an intrinsically motivating project but limit oneself to the tools one has to learn for extrinsic reasons.

Halfwit20

Then my advice is this: talk to someone who has the entry-level job you want and ask him or her what skills he/she needs to do it and what skills whoever hired him or her thinks one needs. Then learn them. As for the "oddly unable" thing, I suggest reflecting on how you learned what you are good at in the first place. If there's anything different about your current, ineffective approach to learning new techniques, stop doing it. Unless you've recently suffered brain trauma, it's likely just some weird ugh-field-like effect.

Halfwit20

Yeah, that does sound pretty awful, not something you'd want to induce. For me it was just this: pressure on my chest, inability to move my limbs, and the feeling that some entity was observing me. There was no gnashing of teeth.

Halfwit20

You're asking me for advice? That was the first time I've looked at code in my life. I'm sure the textbook recommendation thread has something on programming. From what I understand, though, halfway-decent programmers are very employable at the moment, so either you're overestimating your ability, there's some other factor you haven't shared, or my intuition on the employment prospects of halfway-decent programmers (I assume this means close to, if slightly below, the level of the average pro) is incorrect.

2MixedNuts
No, just very haphazard. I know how to do many things, but I don't know how to do many other, often easier, things, and I seem to have become oddly unable to learn. Of course nobody wants a CSS whiz who never learnt HTML5.
Halfwit90

I was lucky enough to have read about that before the one time it happened to me. So I wasn't scared. I just thought, So this is sleep paralysis. Since then I've read that lucid dreamers often try to force themselves into sleep paralysis, as it's the first stop to the sandman's brooding realm. The next time it happens to you, you should try for a Feynman-style lucid dream. It could be fun.

3Qiaochu_Yuan
I attempted to do this on a few occasions, but I don't know if it really worked. It wouldn't have worked during the middle period of my sleep paralysis attacks, when I was also clenching my teeth and could feel them clenching. It felt like my teeth were going to explode. Very unpleasant.
Halfwit20

I edited because the code I looked at seemed to be atypical compared to what others have posted. No, I don't think I'm M3 at all--though my father probably is, as he picked up programming in his twenties and knows many languages. As I had expected the code to look like nonsense, I was merely surprised I could get some idea of what was going on. My prior for being able to get a programming job with <300 hours of dedicated practice is low, but it could be something to investigate as a hobby.

Halfwit270

quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials.

I just did this. And I was surprised; this seemed far less inscrutable than I intuitively expected, having never read any code. My father is a computer programmer, so I may have it in my DNA. He is more intelligent than I am, though. For example, I once told him the ... (read more)
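For anyone wondering what this test looks like in practice, here is a short, self-contained snippet of the kind of readable Python one might pull up; it's a hypothetical example chosen for illustration, not the specific code referenced above:

```python
# A small, self-contained function of the sort a newcomer might try to read.
def count_words(text):
    """Return a dict mapping each word in `text` to how often it appears."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(count_words("the quick brown fox jumps over the lazy dog the end"))
# -> {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, ...}
```

If the control flow here reads almost like English on a first pass, that is roughly the "makes natural sense" reaction the quoted advice is pointing at.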

0Shmi
Whoa! 20 min and in his head? I wish I were that smart. EDIT: Given that I am average or worse at logic puzzles, and that somehow I haven't heard this one before, I have decided to document my thought process as I was solving it. It certainly helps to know that there is a solution. Anyway, my explorations are documented here. Warning: the write-up is rather long and not edited for clarity. I was quite happy that I had found that you have a 1 in 3 chance of solving the puzzle with just two questions (without "exploding god-heads")! The total time spent thinking and writing things up was probably several hours over several days, so an order of magnitude worse than your father :)
6MixedNuts
I'm not completely stupid. I used to be a decent programmer. I'm now a halfway-decent programmer. I'm unable to make any progress, and my ability to hold a job of any kind is dubious. What am I doing wrong?
Kawoomba100

Three gods puzzle (aka "The Hardest Logic Puzzle Ever"; I didn't make that name up!) for reference. Try to solve the puzzle first; I've appended the text, with a small counting sketch after it. The referenced link contains the solution.

Three gods A, B, and C are called, in no particular order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes-no questions; each question must be put to exactly one god. The

... (read more)
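As a quick aside on the difficulty, here is a minimal counting sketch showing why two questions can never guarantee a solution; it is an illustration of the search-space arithmetic, not the puzzle's actual solution:

```python
from itertools import permutations

# The unknowns: which of gods A, B, C is True, False, and Random.
assignments = list(permutations(["True", "False", "Random"]))
print(len(assignments))  # 6 possible role assignments

# Each yes/no question yields at most one bit, so a strategy with k questions
# has at most 2**k distinct answer transcripts to base its final guess on.
for k in (2, 3):
    enough = 2 ** k >= len(assignments)
    print(f"{k} questions: {2 ** k} transcripts, enough in principle: {enough}")

# 2 questions: 4 < 6, so no two-question strategy can always be right
# (which is why a two-question attempt can only succeed some of the time);
# 3 questions: 8 >= 6, so counting alone doesn't rule out a solution.
```

This says nothing about how to actually phrase the three questions, which is where the puzzle's real difficulty lies.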
Halfwit00

Some early science fiction isn't so much about conflict as it is a relation of an unlikely experience. But then, the stories I have in mind weren't exactly that great. So that's not exactly evidence against the assumption. Still, I think a sufficiently skilled writer could create an enjoyable story without conflict, but it would be like a painter throwing out a primary color.

One of my favorites among OP's short posts is Building Weirdtopia. (Yudkowsky's no-spoilers approach to scientific pedagogy is such an intriguing one, I'm quite sad he hasn't spun it in... (read more)

1Decius
I haven't found any exploration stories that don't have some form of 'actor versus environment' that is critical to making the writing engaging to the reader. I've also seen plenty of ways to shift the conflict meta, by means of what amounts to a framing device. I'm excluding descriptive fiction from 'narrative'; I'm not sure exactly what the boundary is, but describing how something works is different from describing somebody operating it.
2Kaj_Sotala
I don't know what you mean by "really a thing", but it has been used more than once, including some academic papers.
Halfwit270

I do tend to think that Aubrey de Grey's argument holds some water. That is, it's not so much general society that will be influenced as wealthy elites. Elites seem more likely to update when they read about a 2x mouse. I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics? But cryonics is a lot harder to believe than life extension. You need to buy pattern identity theory and nanotechnology and Hanson's value of life calculations. In the case of LE, all you have to believe is that the techniques that work... (read more)

I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics?

LessWrongers, and high-karma LessWrongers, on average seem to think cryonics won't work, with mean odds of 5:1 or more against cryonics (although the fact that they expect it to fail doesn't stop an inordinate proportion from trying it for the expected value).

On the other hand, if mice or human organs were cryopreserved and revived without brain damage or loss of viability, people would probably become a lot more (explicitly and emotionally) confide... (read more)

Halfwit20

I think we're past the point where it matters. If we'd had a few lost decades in the mid-twentieth century, then maybe (and just to be cognitively polite here, this is just my intuition talking) the intelligence explosion could have been delayed significantly. We are just a decade off from home computers with >100 teraflops, not to mention the distressing trend of neuromorphic hardware (here's Ben Chandler of the SyNAPSE project talking about his work on Hacker News). With all this inertia, it would take an extremely large downturn to slow us now. Engineering a... (read more)

Halfwit60

The mathematician John von Neumann, born Neumann Janos in Budapest in 1903, was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, "only he was fully awake." One night in early 1945, von Neumann woke up and told his wife, Klari, that "what we are creating now is a monster whose influence is going to change history, provided there is any history left. Yet it would be impossible not to see it through." Von Neumann was creating one of the first computers, in order to build nuclear weapons. But, Klari said, it was the computers that scared him the most.

Konstantin Kakaes

Halfwit310

The fact that MIRI is finally publishing technical research has impressed me. A year ago it seemed, to put it bluntly, that your organization was stalling, spending its funds on the full-time development of Harry Potter fanfiction and popular science books. Perhaps my intuition there was uncharitable, perhaps not. I don't know how much of your lead researcher's time was spent on said publications, but it certainly seemed, from the outside, that it was the majority. Regardless, I'm very glad MIRI is focusing on technical research. I don't know how much farther you have to walk, but it's clear you're headed in the right direction.

Halfwit20

I think you're an important guy to have around for reasons of evaporative cooling.

[This comment is no longer endorsed by its author]
Halfwit00

The line I came up with, when asking the question to myself, was this: If the singularity is a religion, it is the only religion with a plausible mechanism of action.

Halfwit50

"Why do people worry about mad scientists? It's the mad engineers you have to watch out for." - Lochmon

0kpreid
— Miles Vorkosigan, Komarr by Lois McMaster Bujold
DanielLC250

Considering the "mad scientists" keep building stuff, perhaps the question is "Why do people keep calling mad engineers mad scientists?"

Halfwit20

I believe you can live off Boost for an indefinite period of time.

1CraigMichael
Made it really hard for me to poop normally.
Halfwit20

"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive." - Randall Munroe

[This comment is no longer endorsed by its author]
0BerryPick6
That's been posted at least twice before that I can remember.
Halfwit10

5% is pretty high considering the purported stakes.

-2Alsadius
Not necessarily. If it takes us 15 years to kludge something together that's twice as smart as a single human, I don't think it'll be capable of an intelligence explosion on any sort of time scale that could outmaneuver us. Even if the human-level AI can make something better in a tenth the time, we still have more than a year to react before even worrying about superhuman AI, never mind the sort of AI that's so far superhuman that it actually poses a threat to the established order. An AI explosion will have to happen in hardware, and hardware can't explode in capability so fast that it outstrips the ability of humans to notice it's happening. One machine that's about as smart as a human and takes millions of dollars worth of hardware to produce is not high stakes. It'll bugger up the legal system something fierce as we try to figure out what to do about it, but it's lower stakes than any of a hundred ordinary problems of politics. It requires an AI that is significantly smarter than a human, and that has the capability of upgrading itself quickly, to pose a threat that we can't easily handle. I suspect at least 4.9 of that 5% is similar low-risk AI. Just because the laws of physics allow for something doesn't mean we're on the cusp of doing it in the real world.
2lukeprog
No doubt!
Halfwit150

Untangling the Knot: A User's Guide to the Human Mind

Your Brain, an Owner's Manual

Less than One, Greater than Zero: The Sequences, 2006–2009

Approximating Omega (badly, of course)

Sharpening the Mace

Uncountably Infinite Shades of Grey (my apologies)

Stop Tripping Yourself: A User's Guide to the Human Mind

Marshaling the Mind: An Introduction to the Informed Art of Rationality

Motes and Meaning: The Less Wrong Archives

Of Motes and Meaning

Theory, in Practice

Thinking, in Practice

Thinking in Circles: Avoiding the Known Bugs in Human Reasoning

0Mestroyer
The second one is already a book.
2[anonymous]
I like the first two especially.
2Rukifellth
Many of these are good
Halfwit00

It might be worth going to a sleep doctor; sleep apnea can really fuck up your metabolism, not to mention causing unbelievable akrasia. I would say sleep tests are a GOOD THING, something everyone should do. I had sleep apnea for years. It was like some eldritch monster was sucking away my willpower and I wasn't even aware. Within a few months of getting my mouth guard, which keeps my tongue from blocking my airway while in REM, I lost thirty pounds and gained an enormous well of mental stamina. A small minority of the "metabolically challenged" may just have undiagnosed sleep problems.

[This comment is no longer endorsed by its author]
7Eliezer Yudkowsky
Since it was cheaper than a sleep study, I bought a self-adjusting CPAP on Craigslist and just tried it. Nothing miraculous occurred.
Halfwit00

He was an adviser. But I see he no longer is. Retracted.

Halfwit220

He killed himself; this is true. He faced 35 years of confinement and the very real prospect of rape. This, too, is true. He was criminalized for his intent to freely distribute scientific knowledge. This makes him a hero. He broke, but only storybook heroes are unbreakable. It's depressing how society seems to persecute those most able to improve it; that the broken machine slays those very engineers who've dedicated their lives to its repair.

0novalis
Does Kurzweil have anything to do with the Singularity Institute? Because I don't see him listed as a director or advisor on their site.
Halfwit30

In terms of minimizing the status loss for academics affiliating with SIAI, a banal minimally-descriptive name may be superior. People often overestimate the value of the piquant. Beige may not excite, but it doesn't offend. Any term which has the potential to become a buzzword, or acquire alternative definitions, should be avoided. The more exciting the term, the higher the chance of appropriation.

This was the point I was trying to make; on rereading it after posting, I realized it was remarkably poorly written and wasn't even clearly conveying what I was thinking when I wrote it. I didn't have time to edit it then, so I retracted.

-2MugaSofer
Thank you for clarifying.
0A1987dM
BTW, here's an interesting blog post about considerations relevant to naming stuff.
Halfwit20

When a heuristic AI is creating a successor that shares its goals, does it insist on formally verified self-improvements? Does it try to understand its mushy, hazy goal system so as to avoid reifying something it would regret given its current goals? It seems to me that some mind will eventually have to confront the FAI issue; why not humans, then?

0timtyler
If you check with Creating Friendly AI you will see that the term is defined by its primary proponent as follows: It's an anthropocentric term. Only humans would care about creating this sort of agent. You would have to redefine the term if you want to use it to refer to something more general.
0JoshuaFox
Apparently not. If it did do these things perfectly, it would not be what we are here calling the "heuristic AI."
Halfwit00

I highly support changing your name--there are all sorts of bad juju associated with the term "singularity". My advice: keep the new name as bland as possible, avoiding anything with even a remote chance of entering the popular lexicon. The term "singularity" has suffered the same fate as "cybernetics".

[This comment is no longer endorsed by its author]
-4MugaSofer
I note that you've retracted your post, but I still feel the need to ask: shouldn't the name reflect what they do?
3gwern
Very positive too. Hard to ask for more favorable coverage than that.
Halfwit110

How much money would you need magicked up to allow you to shed fundraising, infrastructure, etc., and just hire and hole up with a dream team of hyper-competent maths wonks? Restated: at what set amount would SIAI be comfortably able to aggressively pursue its long-term research?

1hairyfigment
He once mentioned a figure of US $10 million / year. Feels like he's made a similar remark more recently, but it didn't show in my brief search.
Halfwit60

The quantum lottery is my retirement plan, my messy messy retirement plan.

-1prase
Hitting somebody on the head with a baseball bat is likely to make him dead and the perpetrator imprisoned, which is certainly not the outcome I prefer. Not to mention the difficulties of applying this en masse. You should come up with much better methods.
Halfwit70

I just looked it up. It's odd that there was little interest. There are so many advantages to a high-IQ child. Said child would likely need fewer years of child care, would require less attention academically, and might attend college a few years earlier, likely with a full or partial scholarship. And in terms of maternal pride (i.e., signaling your own competence as a mother by talking about your child's success), high-IQ sperm is a goldmine. Any single (or reproductively duplicitous) mother would be crazy not to select physicist or mathematician sperm, especially taking into account regression to the mean.

5gwern
Yeah, who knows what their true refusal is? There could be a lot of things: sperm donors are already screened for what is effectively high IQ via interest in the process and university degrees; the women don't want to take the risk of being novel (risk-aversion about anything to do with a kid? that's never happened before...); the promise of the bank was a bit of a failure since by the time you've gotten so very old that a Nobel could've been awarded you are also so old your sperm is lower quality; etc.
6[anonymous]
It might be that people genuinely don't understand how heritable things like IQ are. Our culture very much tries to downplay it. Also, it seems pretty obvious that only a tiny fraction of single mothers use sperm donors anyway; I would argue the majority of them ideally also want to maintain a relationship with the father, or didn't plan to have a child at all.
Halfwit30

If sperm banks advertised high-IQ sperm, we would already have the beginnings of a eugenics program. If we found a way to clone eggs very cheaply, an average couple could have two children, each of whom would have half the DNA of a genius and half the DNA of one of their average parents. The advantage of this, in terms of social mobility, could be enough to avoid the need for coercive eugenics.

Regardless, I'm sure such a thing would be outlawed for various stupid reasons.

Emile130

If sperm banks advertised high-IQ sperm,

They do! (or at least, they allow you to select what kind of degree the donor has)

Halfwit30

And remember, living in a world in which the average person is as smart as an upper-level computer programmer still isn't nearly as humbling as the fact that a well-organized cubic centimeter of carbon could be millions of times smarter than anyone.

I figure this to be a good general rule on these matters: unless you designed your own brain, you should not be proud of your own brain.

7NancyLebovitz
Do people get any points for taking good care of their brains and stocking their brains with ideas and information?
3johnlawrenceaspden
But what would I have designed my own brain with?
Halfwit00

SRI's Shakey would be justified in its dualism.