Comment author: loqi 21 April 2011 07:40:25AM *  2 points [-]

It's funny you say that; I once figured out a problem for someone by diagnosing an error message involving C++ templates. Wizardry! However, the "base" of the error message looked roughly like

error: unknown type "boost::python::specify_a_return_value_policy_to_wrap_functions_returning<Foo>"

Cryptic, right? It turns out he needed to specify a return value policy in order to wrap a function returning Foo. All I did for him was visually scan past the junk, looking for anything readable or the word "error".

In response to comment by loqi on Learned Blankness
Comment author: TeMPOraL 22 April 2013 01:27:17PM 1 point [-]

That's the general algorithm for reading STL error messages. I still can't see why people look at you as if you were a wizard, when all you need to do is quickly filter out the irrelevant 90% of the message. A simple pattern-matching exercise.
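The whole "algorithm" fits in a few lines. Here's a sketch in Python, run against a hypothetical, heavily abbreviated dump (the surrounding context lines are made up; real template-error dumps run to hundreds of such lines):

```python
def interesting_lines(compiler_output):
    """Scan a compiler dump for the few lines actually worth reading."""
    return [line for line in compiler_output.splitlines()
            if "error" in line.lower()]

# Hypothetical, abbreviated template-error dump:
dump = """\
In instantiation of 'struct boost::python::detail::caller_arity<1u>':
note: candidate expects 2 arguments, 3 provided
error: unknown type 'boost::python::specify_a_return_value_policy_to_wrap_functions_returning<Foo>'
note: in expansion of macro BOOST_PYTHON_MODULE"""

print(interesting_lines(dump))
```

Four lines in, one line out; that one line is the whole diagnosis.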

In response to comment by [deleted] on Learned Blankness
Comment author: handoflixue 27 April 2011 12:05:35AM 0 points [-]

I usually start with Google :)

Comment author: TeMPOraL 22 April 2013 01:17:22PM 1 point [-]

I delay Googling to the last possible moment on purpose. It's by figuring stuff out by yourself that you really learn :).

In response to Learned Blankness
Comment author: childofbaud 20 April 2011 08:48:31PM 6 points [-]

I have observed similar behavior in others. Only I called it 'blackboxing', for lack of a better word. I think this might actually be a slightly better term than 'learned blankness', so I hereby submit it for consideration. It's borrowed from the software engineering idea of a black box abstraction.

People tend to create conceptual black boxes around certain processes, which they are remarkably reluctant to look within and explore, even when something does go wrong. This is what seems to have happened with the dishwasher incident. The dishwasher was treated as a black box. Its input was dirty dishes, its output was clean ones. When it malfunctioned, it was hard to see it as anything else. The black box was broken.

Of course, engineers and programmers often go out of their way to design highly opaque black boxes, so it's not surprising that we fall victim to this behavior. This is often said to be done in the name of simplicity (the 'user' is treated as an inept, lazy moron), but I think an additional, more surreptitious reason is to keep profit margins high. Throwing out a broken dishwasher and buying a new one is far more profitable for the manufacturer than making it easy for users to pick it apart and fix it themselves.

The open source movement is one of the few prominent exceptions to this that I know of.

Comment author: TeMPOraL 22 April 2013 01:16:06PM 3 points [-]

This is often said to be done in the name of simplicity (the 'user' is treated as an inept, lazy moron), but I think an additional, more surreptitious reason is to keep profit margins high.

There's also one much more important reason. To quote A. Whitehead,

Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

Humans (right now) just don't have enough cognitive power to understand every technology in detail. If not for the black boxes, one couldn't get anything done today.

The real issue is whether we're willing to peek inside the box when it misbehaves.

Comment author: Robin_Hanson2 16 March 2007 08:07:40PM 9 points [-]

When people know that candy bars can be too tempting, they can prefer to work in places without candy bar machines. Similarly, people who find ice cream or cookie stands too tempting can stay away from shopping malls that allow such stands. People who find the sight of naked people too tempting can choose to work and shop in places that do not allow people to walk around naked. Economists have worked out models of many of these situations, and they keep coming back to the conclusion that giving people mechanisms of self-control is good enough, unless people are biased to underestimate their self-control problems. And so recommendations for more self-control regulation tend to be based on claims that we are biased to underestimate our problem.

Comment author: TeMPOraL 20 April 2013 08:16:38PM *  4 points [-]

And so recommendations for more self-control regulation tend to be based on claims that we are biased to underestimate our problem.

There is something to those claims given that pretty much every addiction therapy (be it alcohol, food, porn or something else) starts from admitting to oneself that one has underestimated the problem.

Comment author: TeMPOraL 27 March 2013 11:04:42AM 5 points [-]

That's something that I think laypeople never realize about computer science - it's all really simple things, but combined together at such a scale and pace that in a few decades we've done the equivalent of building a cat from scratch out of DNA. Big complex things really can be built out of extremely simple parts, and we're doing it all the time, but for a lot of people our technology is indistinguishable from magic.

-- wtallis

Comment author: wedrifid 18 February 2012 10:09:31PM 4 points [-]

That is brilliant, I'm taking that one. It's refreshing to see an alternative to the typical belligerently optimistic 'motivational' quotes that deny the rather significant influence of chance.

Comment author: TeMPOraL 13 March 2013 06:16:13PM *  1 point [-]

Well, but it can also be interpreted as a recursive definition expanding to:

Luck is opportunity plus preparation plus opportunity plus preparation plus opportunity plus preparation plus .... ;).

Comment author: novalis 26 February 2013 04:03:54AM 4 points [-]

The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.

But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking. If the way of thinking is so new, then why should we expect to find stories about it? And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn't take over the world)? Perhaps this is too cynical?

Comment author: TeMPOraL 26 February 2013 10:19:03AM *  2 points [-]

If the way of thinking is so new, then why should we expect to find stories about it?

To quote from the guy this story was about, "there is nothing new under the sun". At least nothing directly related to our wetware. So we should expect that every now and then people stumbled upon a "good way of thinking", and when they did, the results were good. They just might not have managed to identify what exactly made the method good, or to replicate it.

Also, as MaoShan said, this is a kind of Proto-Bayes-101 thinking. What we now have is this, but systematically improved over many iterations.

(that is, that it was known N years ago but didn't take over the world)?

"Taking over the world" is a complex mix of effectiveness, popularity, luck and cultural factors. You can see this a lot in the domain of programming languages. With ways of thinking it is even more difficult, because - as opposed to programming languages - most people don't learn them explicitly and don't evaluate them based on results/"features".

Comment author: TeMPOraL 04 December 2012 09:25:38AM *  5 points [-]

I like doing math that involves measuring the lengths of numbers written out on the page—which is really just a way of loosely estimating log_10 x. It works, but it feels so wrong.
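The trick works because a d-digit positive integer lies in [10^(d-1), 10^d), so the digit count misses log_10 x by less than one. A minimal sketch:

```python
import math

def digit_log10(x):
    """Estimate log10(x) by 'measuring the length' of x written out."""
    # A d-digit positive integer lies in [10^(d-1), 10^d).
    return len(str(x)) - 1

for x in (7, 42, 12345, 10**9):
    print(x, digit_log10(x), round(math.log10(x), 3))
```

The estimate only ever undershoots, and by less than a full order of magnitude, which is plenty for back-of-the-envelope work.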

Comment author: TeMPOraL 02 December 2012 07:02:50PM *  25 points [-]

It has been said that the past is a foreign country. Well, it is certainly inhabited by foreigners, people whose mindset was shaped by circumstances we shy from remembering. The mother of three children who gave birth eight times. The father of four children, the last of whom cost him his wife. Our minds are largely free of such horrors, and not inured to that kind of suffering. That is the progress of technology. That is what is improving the human race.

It is a long, long ladder, and sometimes we slip, but we've never actually fallen. That is our progress.

In response to Causal Universes
Comment author: Kaj_Sotala 28 November 2012 10:47:58AM *  14 points [-]

Sometimes I still marvel about how in most time-travel stories nobody thinks of this.

The alternate way of computing this is to not actually discard the future, but to split it off to a separate timeline so that you now have two simulations: one that proceeds normally aside for the time-traveler having disappeared from the world, and one that's been restarted from an earlier date with the addition of the time traveler. Of course, this has its own moral dilemmas as well - such as the fact that you're as good as dead for your loved ones in the timeline that you just left - but generally smaller than erasing a universe entirely.

Comment author: TeMPOraL 28 November 2012 09:55:48PM *  2 points [-]

Sometimes I still marvel about how in most time-travel stories nobody thinks of this.

The alternate way of computing this is to not actually discard the future, but to split it off to a separate timeline

Or maybe also another one, somewhat related to the main post - let the universe compute, in its own meta-time, a fixed point [0] of reality (that is, the whole of time between the start and the destination of the time travel gets recomputed into a form that is internally consistent) and continue from there. You could imagine the universe-computer causally simulating the same period of time again and again until a fixed point is reached, just like the iterative algorithms used to find fixed points of functions.

[0] - http://en.wikipedia.org/wiki/Fixed_point_(mathematics)
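The iterative procedure is the same one used for ordinary functions. As a toy stand-in for "recomputing the timeline", here is a sketch iterating x -> cos(x), whose unique fixed point is the Dottie number (~0.739):

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x -> f(x) until the value stops changing, i.e. f(x) == x."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no fixed point found within max_iter iterations")

# x = cos(x) converges from any starting point; 1.0 is arbitrary.
print(fixed_point(math.cos, 1.0))
```

In the time-travel analogy, f would be "run the timeline forward given the traveler's arrival", and the fixed point is the self-consistent history.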
