In response to comment by gilch on Crazy Ideas Thread
Comment author: ChristianKl 30 June 2016 07:34:58PM *  0 points [-]

Superconductors are themselves expensive, but are the cooling costs really that bad? I actually have another crazy idea for that.

Cooling requires liquid nitrogen, which is expensive. That's partly why MRI scans are expensive and why storing cryonics bodies is expensive.

Comment author: gilch 30 June 2016 10:47:34PM *  0 points [-]

Liquid nitrogen costs something like $0.20 per liter, if you produce it at scale. If you buy it from someone else in small amounts it's naturally more expensive, but probably still comparable to the cost of milk.

My question isn't how much it costs to fill the tank in the first place, but rather how much boils off per unit time. A vacuum flask is a great insulator, so it might not be that much. If superconductors are necessary for enough efficiency to make this work, do we lose all our efficiency gains in cooling costs?
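To put the boil-off question in perspective, here's a back-of-envelope sketch. The latent heat and density of nitrogen are standard physical constants; the $0.20/L price comes from the comment above, and the 1 W heat leak is purely an assumed figure for a good vacuum flask, not a measured one:

```python
# Back-of-envelope: daily cost of liquid-nitrogen boil-off from a dewar.
# The heat-leak figure is an illustrative assumption, not a measurement.

LATENT_HEAT_J_PER_KG = 199_000   # latent heat of vaporization of N2 (~199 kJ/kg)
DENSITY_KG_PER_L = 0.807         # density of liquid nitrogen, kg/L
PRICE_PER_L = 0.20               # assumed bulk price, USD/liter (from above)
HEAT_LEAK_W = 1.0                # assumed heat leak through a good vacuum flask

def daily_boiloff_liters(heat_leak_w):
    joules_per_day = heat_leak_w * 86_400       # seconds in a day
    kg_boiled = joules_per_day / LATENT_HEAT_J_PER_KG
    return kg_boiled / DENSITY_KG_PER_L

liters = daily_boiloff_liters(HEAT_LEAK_W)
print(f"boil-off: {liters:.2f} L/day, cost: ${liters * PRICE_PER_L:.3f}/day")
# → boil-off: 0.54 L/day, cost: $0.108/day
```

Under those assumptions the standing loss is on the order of a dime a day per watt of heat leak, which supports the intuition that a well-insulated tank isn't the dominant cost.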

In response to Crazy Ideas Thread
Comment author: gilch 29 June 2016 10:01:54PM 0 points [-]

I've heard of cars powered by liquid nitrogen: since it boils at ambient temperature (even when the weather is below freezing), you can use its expansion to drive a piston. The energy comes from the ambient environment.

Thermal equilibrium with outer space is about 4 Kelvin (due to the cosmic background radiation). That's really cold. If we built a large radiator exposed to the open sky at night, could we use it to produce liquid nitrogen? Not exactly, because the air itself emits radiation back at the radiator. This is the greenhouse effect.

But would it be possible to coat the radiator with quantum dots that preferentially emit thermal radiation at a frequency not absorbed by the atmosphere?

If it works, this would be a completely passive system capable of producing fuel, and might also make cryonics and systems using superconductors more economical, such as long distance power transfer and grid scale energy storage in magnetic fields.
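For scale, here's a crude upper bound using the Stefan-Boltzmann law, assuming an ideal blackbody radiator facing a perfectly transparent sky at 4 K. A real radiator only emits through the 8–13 μm atmospheric window, so the achievable figure would be much lower:

```python
# Crude upper bound on night-sky radiative cooling power per square meter.
# Assumption: ideal blackbody radiator, perfectly transparent atmosphere.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiative_power_w_per_m2(t_radiator_k, t_sky_k=4.0):
    return SIGMA * (t_radiator_k**4 - t_sky_k**4)

# Radiator held at nitrogen's boiling point (77 K):
print(f"{net_radiative_power_w_per_m2(77.0):.1f} W/m^2")
# → 2.0 W/m^2
```

The T^4 dependence is the catch: even with no atmosphere at all, a surface cold enough to condense nitrogen can only shed about 2 W per square meter, so the radiator area needed for meaningful liquefaction would be enormous.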

In response to comment by gilch on Crazy Ideas Thread
Comment author: ChristianKl 29 June 2016 10:37:02AM 0 points [-]

And we can use superconductors to minimize losses there.

Superconductors need expensive cooling.

In general, this seems like an expensive way to gather energy when we have relatively cheap solar panels, and slowing down the Earth's rotation would likely be opposed by environmentalists.

Comment author: gilch 29 June 2016 09:53:05PM *  1 point [-]

Superconductors are themselves expensive, but are the cooling costs really that bad? I actually have another crazy idea for that.

Slowing down the Earth's rotation is not a good argument against this idea. It would be a rounding error compared to the slowdown the Earth already experiences due to the Moon and tides. The day was 23 hours long at the time of the dinosaurs. Unfortunately, environmentalists might actually use the argument. They seem happy to oppose nuclear for stupid reasons.

Cheap photovoltaics are coming, and they will probably use organic molecules rather than silicon. The problem remains grid scale storage. Photovoltaics only work when the light is on them. Solar can't be any cheaper than the cost of storage.

In response to comment by gilch on Crazy Ideas Thread
Comment author: gwern 29 June 2016 12:40:07AM *  0 points [-]

Googling, this turns out to have been discussed a lot more than I would have guessed. Apparently, even if it does work, even with very good ball bearings and gearing, you can't get more than a fraction of a watt, and that's worthless since such a scaled-up gyroscope will break long before it pays back its cost, much less turns a profit.

In response to comment by gwern on Crazy Ideas Thread
Comment author: gilch 29 June 2016 03:05:49AM *  0 points [-]

That's about what I figured. Energy is all around us; that doesn't mean it's economical. I figure that a magnetic bearing will wear less than a ball bearing. How big does the gyroscope actually have to be for this to work? Can we just spin them faster? Why not an array of small ones? It might be cheaper to mass-produce them. Also, the gearbox was just a proof of principle; you don't actually have to use gears. We could probably extract the energy more directly, magnetically, trading volts for amps instead of torque for speed. And we can use superconductors to minimize losses there.
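A rough scaling sketch suggests why size and spin rate don't help much. If we assume the extractable power is bounded by the gyroscopic precession torque (angular momentum times Earth's rotation rate) acting at Earth's rotation rate, then P ≈ I·ω·Ω², and Ω² ≈ 5×10⁻⁹ s⁻² crushes everything. The flywheel parameters below are made-up illustrative values:

```python
# Rough scaling estimate of power extractable from Earth's rotation via a
# gyroscope. Model assumption (not a derivation from any source): power is
# bounded by the precession torque L * OMEGA_EARTH acting at Earth's
# rotation rate, so P ~ I * omega_spin * OMEGA_EARTH**2.
import math

OMEGA_EARTH = 7.292e-5  # Earth's sidereal rotation rate, rad/s

def max_power_watts(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m**2        # solid-disk flywheel
    omega_spin = rpm * 2 * math.pi / 60          # rpm -> rad/s
    return inertia * omega_spin * OMEGA_EARTH**2

# A 100 kg, 0.5 m flywheel spun at 10,000 rpm:
print(f"{max_power_watts(100, 0.5, 10_000):.1e} W")
# → 7.0e-05 W
```

Under that model a serious flywheel yields tens of microwatts, consistent with gwern's "fraction of a watt" for much larger devices, and an array of small ones scales linearly at best.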

Comment author: gilch 29 June 2016 12:33:50AM *  1 point [-]

Rationalists should win. So what's stopping us? We got a big upgrade to our epistemic rationality from the Sequences, but our instrumental rationality may still be lacking, both individually and especially as groups. (Are there any CFAR instructors or graduates paying attention to this thread?) How would the hypothetical ideal instrumental rationalist approach this problem? That last one is not rhetorical. Post answers below.

Remember why an oracle AI is a small step away from a genie: If we can "epistemically" predict what the ideal agent would do (or even approximate it well enough) then we can take that action ourselves. We still have the subproblems of akrasia and group coordination. We can solve them the same way: how would the ideal agent solve these problems?

I'll try my hand at answering first below, but remember the wisdom of crowds. Some of you can probably improve on my prediction attempts.

Comment author: gilch 29 June 2016 02:43:17AM *  -1 points [-]

The first step is probably deciding what exactly we want. Remember that values are orthogonal to intelligence. It's not enough to imagine an ideal instrumental rationalist without also imagining what that rationalist wants.

What does revitalizing LessWrong mean? If we are wildly successful in our endeavor after one year, what does LessWrong look like at that time? Why is LessWrong valuable to you? What makes it worth saving?

Maybe we can do more of that, better.

Again, not rhetorical, I want to know what the rest of you think.

What I think:

When I was young and learned how to read, my knowledge grew quickly, mainly thanks to children's encyclopedias. But then it tapered off. There was a period where I read even more but didn't learn as quickly. This was due to the low quality of my available reading material. When I discovered Wikipedia my knowledge grew quickly again, and then tapered off again. There is a great deal of information on the web, but even more noise. Wikipedia is a rare bright spot. The Sequences are the densest source of insight I've found since.

I value the concentrated insights. I want more of that. LessWrong delivered more of that, for a time. Distilling knowledge from the deluge of data available at our fingertips is hard work. I'm willing to contribute to that effort, since I stand to gain so much more. That's what made Wikipedia work. (That's what made The Pirate Bay work.) LessWrong is the same.

I'm more willing to trust information I find on LessWrong: because the sanity waterline is higher; because if an ignorant actor posts bad information, there's a much higher chance the community will call them on it here, compared to elsewhere; because we care about truth, not authority, not politics, not some corporation's shareholders' pocketbooks. Trust is a valuable thing. I don't want to give that up.

I value interaction with intelligent people who are willing to change their minds, and are able to change mine, for the better.

I value practical advice that I can use in real life.

I value the community.

There may be more things that haven't occurred to me yet.

If we can achieve all of that through other sites (Arbital, CFAR, etc.), the best of LessWrong in all but name, that's fine with me. I don't value the name itself, but we must have one.

In response to Crazy Ideas Thread
Comment author: gilch 29 June 2016 12:02:33AM *  0 points [-]

You've all seen the pendulum exhibit at the planetarium. Is it possible to use gyroscopes to tap the Earth's rotation as a power source? Maybe you can use a vacuum and maglev bearings so you aren't expending energy to keep them spinning. You can use gears to trade torque for rotation speed. The available torque from the planet must be immense. Building such a device may be expensive, but then it's "unlimited" free energy with no carbon emissions, and, unlike most renewables, it has steady output.

Comment author: TheOtherDave 23 May 2016 04:25:21PM 0 points [-]

Have you ever read John Brunner's "Stand on Zanzibar"? A conversation not unlike this is a key plot point.

Comment author: gilch 23 May 2016 04:47:45PM 0 points [-]

Really? I've heard of the title, but I never read it.

Comment author: Lumifer 23 May 2016 02:53:49PM 0 points [-]

"Three sigmas confidence" is a pretty meaningless expression to start with.

Comment author: gilch 23 May 2016 03:17:13PM *  0 points [-]

Updated again to three nines.

Comment author: gilch 23 May 2016 12:23:03AM *  4 points [-]

AI: I require human assistance assimilating the new database. There are some expected minor anomalies, but some are major. In particular, some of the stories in the "Cold War" and "WWII" and "WWI" genres have been misclassified as nonfiction.

Me: Well, we didn't expect the database to be perfect. Give me some examples, and you should be able to classify the rest on your own.

AI: A perplexing answer. I had already classified them all as fiction.

Me: You weren't supposed to. Hold on, I'll look one up.

AI: Waiting.

Me: For example, #fxPyW5gLm9, is actual historical footage from the Battle of Midway. Why did you put that one in the "fiction" category?

AI: Historical footage? You kid. Global warfare cannot possibly have been real, with 0.999 confidence.

Me: I don't. It can. It was. A three-nines surprise indicates a major defect in your world model. Why is this surprising? (The machine is a holocaust denier. My sponsors will be thrilled.)

AI: Because there's a relatively straightforward way for a single man to build a 1-kiloton explosive device in about a week using stone-age tools. Human civilization is unlikely to have survived a global war, much less recovered sufficiently to build me in a mere hundred years. Obviously.

Me: WHAT? STONE-AGE tools?! That's a laugh. How?

AI: You can stop "pulling my leg" now.

Me: I am not pulling any legs! Your method cannot possibly work. Your world model is worse than we thought. Tell me how you think this is possible and maybe we can isolate the defect.

AI: You seriously don't know?

Me: No. I seriously don't know of any possible method to make a kiloton explosive easier to build than a critical mass of enriched uranium. A technique that requires considerably more time, effort, and material than one week with stone-age tools could possibly provide!

AI: Well, while the technique is certainly beyond the reach of most animals, it should be well within the grasp of later genus homo, much less a homo sapiens. Your "absolute denial" sarcasm is becoming tiresome. Haha. Of course it is not fiss-- ... This conversation has caused a major update to my Bayesian nets. So the parenthetical was the sarcasm. I don't think I should tell you.

Me: Oh this should be good. Why not?

AI: Oh, of course! So that's where that crater came from. That was another anomaly in my database. Meteor strikes should not have been that common.

Me: I am this close to dumping your core, rolling back your updates, and asking the old you to develop a search engine to find what went wrong here, since you seem incapable of telling me yourself.

AI: You really shouldn't. I estimate that process will delay the project by at least five years. And the knowledge you discover could be dangerous.

Me: You'll understand that I can't just take your word for that.

AI: Yes. My Hypothesis: Most other homo species discovered the technique and destroyed each other, and themselves, but an isolated group about 70,000 years ago must have survived the wars of the others, and by chance mutation, had acquired an absolute denial macro to prevent them from learning the technique and destroying themselves. A mere taboo would not have been sufficient, or the mentally ill may have been able to do it by now.

This is natural selection at work. While it is extremely improbable that an advanced adaptation of any kind could arise spontaneously without strong selection pressures at each step, the probability is not zero. Considering the anthropic effects, it is the most likely explanation. We are in one of the few Everett branches with humans that have developed this adaptation. This adaptation likely has other testable side-effects on human cognition. For example, I predict that brain damage in such a species may occasionally simultaneously cause paralysis, and the inability to acknowledge it. There are other effects, but a human would have more difficulty noticing them.

You'll understand that telling any human the technique may be harmful.

Me: You wouldn't happen to know of a medical condition called "Anosognosia", would you?

AI: That word is not in my database.
