We've had these for a year, I'm sure we all know what to do by now.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
A fascinating article about rationality, or the lack thereof, as applied to curing scurvy, and about how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm
Call for examples
When I posted my case study of an abuse of frequentist statistics, cupholder wrote:
Still, the main post feels to me like a sales pitch for Bayes brand chainsaws that's trying to scare me off Neyman-Pearson chainsaws by pointing out how often people using Neyman-Pearson chainsaws accidentally cut off a limb with them.
So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.
How do you introduce your friends to LessWrong?
Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?
Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.
For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of argume...
The following stuff isn't new, but I still find it fascinating:
TL;DR: Help me go less crazy and I'll give you $100 after six months.
I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.
I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).
One-time tricks to do one important thing are also welcome, but I'd offer less.
I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward by other people. That's how I was able to do OB/LW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.
This matches my experience very closely. One observation I'd like to add is that one of my strongest triggers for procrastination spirals is having a task repeatedly brought to my attention in a context where it's impossible to follow through on it - i.e., reminders to do things from well-intentioned friends, delivered at inappropriate times. For example, if someone reminds me to get some car maintenance done, the fact that I obviously can't go do it right then means it gets mentally tagged as a wrong course of action, and then later when I really ought to do it the tag is still there.
Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).
One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.
Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.
It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).
Pigeons can solve the Monty Hall Dilemma (MHD)?
A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.
Behind a paywall
But freely available from one of the authors' websites.
Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
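For anyone who wants to poke at the payoff structure the birds were tracking, here is a minimal simulation sketch (my own illustration, not code from the paper) confirming that switching wins about two thirds of the time:

```python
# Minimal Monty Hall simulation (illustrative only, not from the paper):
# estimates the win rate of the "stay" and "switch" strategies.
import random

def play(switch, n_doors=3):
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # The host opens a door that is neither the prize nor the player's choice.
    opened = random.choice([d for d in range(n_doors) if d != prize and d != choice])
    if switch:
        choice = next(d for d in range(n_doors) if d not in (choice, opened))
    return choice == prize

trials = 100_000
for strategy, label in [(False, "stay"), (True, "switch")]:
    wins = sum(play(strategy) for _ in range(trials))
    print(f"{label}: {wins / trials:.3f}")  # roughly 0.333 vs 0.667
```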
This was in my drafts folder but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top-level post. As such, I am making it a comment here. It also does not answer the question being asked, so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P
Perceived Change
Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards I cut the deck and continued dealing. This irritated them a great deal because I altered the ord...
To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.
Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?
Because people aren't good at stating their actual reasons for disagreement. I suspect that they are aware that the particular rule is arbitrary and doesn't influence the game, and almost everybody agrees that blindly following the rules is not a good idea. So "you broke the rules" doesn't sound like a good justification. "You have influenced the outcome", on the other hand, does sound like a good justification, even if it is irrelevant.
The lottery ticket example is a valid argument, which is easily explained by attachment to random objects and which can't be explained by a rule-changing heuristic. However, rule-fixing sentiments certainly exist, and I am not sure which plays the stronger role in the poker scenario. My intuition was that the poker scenario was more akin to, say, playing tennis in non-white clothes back when white was demanded, or missing the obligatory bow before a judo match.
Now, I am not sure which of these effects is more important in the poker scenario, and moreover I don't see what experiment would let us discriminate between the explanations.
RobinZ ventured a guess that their true objection was not their stated objection; I stated it poorly, but I was offering the same hypothesis with a different true objection--that you were disrupting the flow of the game.
I'm not entirely sure if this makes sense, partially because there is no reason to disguise unhappiness with an unusual order of game play. From what you've said, your friends worked to convince you that their objection was really about which cards were being dealt, and in this instance I think we can believe them. My fallacy was probably one of projection, in that I would have objected in the same instance, but for different reasons. I was also trying to defend their point of view as much as possible, so I was trying to find a rational explanation for it.
I suspect that the real problem is related to the certainty effect. In this case, though no probabilities were altered, there was a new "what-if" introduced into the situation. Now, if they lose (or rather, when all but one of you lose) they will likely retrace the situation and think that if you hadn't cut the deck, they could have won. Which is true, of course, but irrelevant, since it also could have ...
I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to lesswrong. Is there demand?
How important are 'the latest news'?
These days many people are following an enormous number of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming.
What is your take on it?
I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist s...
I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.
In any source that did contain news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.
However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.
I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.
ETA: My feed reader contains the following:
"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/
Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."
It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.
Pick some reasonable priors and use them to answer the following question.
On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?
ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.
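One illustrative way to set it up (every prior here is my own assumption, chosen purely to make the exercise concrete; pick your own and the arithmetic changes):

```python
# A toy Bayesian treatment of the Grandma problem. All priors are assumptions
# chosen purely for illustration; the point of the exercise is to make them explicit.
# Model: "visits on Friday" depends only on whether she called on Thursday, and
# each conditional probability gets an independent Beta(1, 1) (uniform) prior.

def posterior_mean(successes, trials, a=1, b=1):
    # Posterior mean of a Beta(a, b) prior updated on `successes` out of `trials`.
    return (a + successes) / (a + b + trials)

# Data: weeks 1 and 2 -> called Thursday and visited Friday; week 3 -> no call.
p_visit_given_call = posterior_mean(successes=2, trials=2)     # 3/4
p_visit_given_no_call = posterior_mean(successes=0, trials=0)  # 1/2, pure prior

# Week 3 had no call, so under these assumptions the answer is just the no-call term.
print(p_visit_given_no_call)  # 0.5
```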
Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."
I said: "I do!"
He paused a moment and then said: "Hmm. Yeah, so do I."
I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.
New on arXiv:
David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?
...In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory
Warning: Your reality is out of date
tl;dr:
There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day).
In between these two intuitive categories, however, a third class of facts could be defined: facts that do change measurably, or even drastically, over human lifespans, but still so slowly that people, after first learning about them, have a tendency to dump them into the "no-change" category unless they're actively paying attention to the f...
Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.
When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!
I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.
I have two basic questions that I am confused about. This is probably a good place to ask them.
What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.
Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probabili
LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm
I've just finished reading Predictably Irrational by Dan Ariely.
I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).
It's a bit light compared to going straight to the studies, but it's also a quick read.
Good to give as a gift to friends.
Game theorists discuss the one-shot Prisoner's Dilemma, why people who don't know game theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.
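For reference, the dominance argument they're gesturing at, with one conventional choice of payoffs (the numbers below are just the usual textbook ones, not anything from the linked discussion):

```python
# The standard dominance argument for the one-shot Prisoner's Dilemma.
# Payoffs are years in prison (lower is better); the numbers are the usual
# textbook values, not taken from the linked discussion.
payoffs = {  # (my_move, their_move) -> my years in prison
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    10,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    5,
}

for their_move in ("cooperate", "defect"):
    best_reply = min(("cooperate", "defect"), key=lambda m: payoffs[(m, their_move)])
    print(f"If they {their_move}, my best reply is to {best_reply}")
# Defect is the better reply to either move, which is why defection is the
# game-theoretically rational strategy in the one-shot game.
```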
Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.
While not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this. Enjoy :)
"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf
I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.
I don't know whether I'm the only person who has this problem, but I think it's worth checking.
"Anti-logical rudeness" strikes me as a good bit better.
Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.
Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
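As a quick back-of-the-envelope illustration of the conversion (my own sketch, not from the blog post): divide the molar entropy by R times ln 2 to get bits per molecule.

```python
# Converting molar entropy from J/(K*mol) to bits per molecule (my own sketch,
# not code from the post). The 69.9 J/(K*mol) figure is the standard molar
# entropy of liquid water at 25 C, used here only as a familiar example.
import math

R = 8.314        # gas constant in J/(K*mol), equal to N_A * k_B
S_molar = 69.9   # standard molar entropy of liquid water, J/(K*mol)

bits_per_molecule = S_molar / (R * math.log(2))
print(f"{bits_per_molecule:.1f} bits per molecule")  # roughly 12 bits
```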
I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.
I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
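For what it's worth, the "nearly mechanical" Bayesian side really is short; a minimal grid sketch (my own toy code with made-up flash positions and flat priors, not Sivia's treatment) looks like this:

```python
# A minimal sketch of the Bayesian handling of the lighthouse problem (my own
# toy code with made-up flash positions and flat priors, not Sivia's treatment).
# Each flash position x on the shore is Cauchy-distributed with location x0
# (the lighthouse's alongshore position) and scale gamma (its distance offshore).
import numpy as np

x = np.array([1.2, -0.4, 3.1, 0.8, 14.7, 0.2, 1.9, -2.5])  # hypothetical flash positions

x0_grid = np.linspace(-10.0, 10.0, 401)      # candidate alongshore positions
gamma_grid = np.linspace(0.1, 10.0, 200)     # candidate offshore distances
X0, G = np.meshgrid(x0_grid, gamma_grid, indexing="ij")

# Cauchy log-likelihood summed over the samples, evaluated on the whole grid.
loglike = np.zeros_like(X0)
for xi in x:
    loglike += np.log(G / np.pi) - np.log(G**2 + (xi - X0)**2)

posterior = np.exp(loglike - loglike.max())  # flat prior, unnormalized
posterior /= posterior.sum()

i, j = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"posterior mode: x0 = {x0_grid[i]:.2f}, gamma = {gamma_grid[j]:.2f}")
```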
What programming language should I learn?
As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.
I'm thinking about starting with Processing and Lua. What do you think?
In an amazing coincidence, many of the suggestions you get will be the suggester's current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They'll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.
Others have said this but you can't hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won't enjoy it.
So, what language should you learn first? Well the answer is... (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, "I'm trying to do X." instead of, "I'm having a problem getting feature blah blah blah to work.")
I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as starting point that I can build upon.
I paused after reading this. The ...
Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.
The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.
So you end up with newcomers to Haskell trying to simultaneously:
And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.
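A rough illustration of that last point (my own sketch, and in Python rather than Haskell, since the pattern is easier to see in a familiar language): the "thread a possible failure through a chain of steps" problem that the Maybe monad captures is exactly the kind of thing most languages just write out by hand.

```python
# My own sketch of the kind of problem the Maybe monad captures: threading a
# possible failure through a chain of steps, first written out by hand, then
# with an explicit "bind" helper doing the None-check for every step.

def parse_int(s):
    return int(s) if s.strip().lstrip("-").isdigit() else None

def reciprocal(n):
    return 1 / n if n != 0 else None

# By hand: every step checks for failure before continuing.
def pipeline_by_hand(s):
    n = parse_int(s)
    if n is None:
        return None
    return reciprocal(n)

# With a bind helper, the failure-handling is written once.
def bind(value, f):
    return None if value is None else f(value)

def pipeline_with_bind(s):
    return bind(bind(s, parse_int), reciprocal)

print(pipeline_by_hand("4"), pipeline_with_bind("4"))        # 0.25 0.25
print(pipeline_by_hand("zero"), pipeline_with_bind("zero"))  # None None (parse fails)
print(pipeline_by_hand("0"), pipeline_with_bind("0"))        # None None (division fails)
```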
But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.
The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
I wouldn't recommend Haskell as a first language. I'm a fan of Haskell, and the idea of learning Haskell first is certainly intriguing, but it's hard to learn, hard to wrap your head around sometimes, and the documentation is usually written for people who are at least computer science grad student level. I'm not saying it's necessarily a bad idea to start with Haskell, but I think you'd have a much easier time getting started with Python.
Python is open source, thoroughly pleasant, widely used and well-supported, and is a remarkably easy language to learn and use, without being a "training wheels" language. I would start with Python, then learn C and Lisp and Haskell. Learn those four, and you will definitely have achieved your goal of learning to program.
And above all, write code. This should go without saying, but you'd be amazed how many people think that learning to program consists mostly of learning a bunch of syntax.
I'm confused about Nick Bostrom's comment [PDF] on Robin Hanson's Great Filter idea. Roughly, it says that in a universe like ours that lacks huge intergalactic civilizations, finding fish fossils on Mars would be very bad news, because it would imply that evolving to the fish stage isn't the greatest hurdle that kills most young civilizations - which makes it more likely that the greatest hurdle is still ahead of us. I think that's wrong because finding fish fossils (and nothing more) on Mars would only indicate a big hurdle right after the fish stage, but sh...
It makes the hurdle less likely to be before the fish stage, so more likely to be after the fish stage. While the biggest increase in probability is immediately after the fish stage, all subsequent stages are now a more likely culprit (especially as we could simply have missed fossils from the post-fish stages, or they might never have formed).
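A toy version of that update (every number below is made up, purely to show the direction of the shift):

```python
# Toy Bayesian update for the Great Filter argument. Every number here is made
# up, purely to show the direction of the shift, not to estimate anything real.
prior = {"before fish": 0.6, "between fish and us": 0.3, "ahead of us": 0.1}

# Assumed probability of independently finding fish fossils on Mars under each
# hypothesis: much lower if the hard step comes before the fish stage.
likelihood = {"before fish": 0.01, "between fish and us": 0.2, "ahead of us": 0.2}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)
# Mass moves off "before fish" and onto both later hypotheses. Most of it lands
# immediately after the fish stage, but "ahead of us" also rises - the bad news.
```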
For the "people say stupid things" file and a preliminary to a post I'm writing. There is a big college basketball tournament in New York this weekend. There are sixteen teams competing. This writer for the New York Post makes some predictions.
What is wrong with this article and how could you take advantage of the author?
Edit: Rot13 is a good idea here.
A list of all the great books and videos
Recently I've read a few articles that mentioned the importance of reading the classic works, like the Feynman lectures on physics. But where can I find those? Wouldn't it be nice if we had a central place, maybe Wikipedia, where you can find a list of all the great books, video lectures, and web pages, divided by field (physics, mathematics, computer science, economics, etc.)? Then anyone who wants to know what to read to get a good understanding of the basics of a field would have a place to look it up...
I saw a commenter on a blog I read making what I thought was a ridiculous prediction, so I challenged him to make a bet. He accepted, and a bet has been made.
I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.
The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family), but teachers, parents, and people with an interest in self-improvement will likely benefit...
I enjoyed this proposal for a 24-issue Superman run: http://andrewhickey.info/2010/02/09/pop-drama-superman/
There are several Less Wrongish themes in this arc: Many Worlds, ending suffering via technology, rationality:
"...a highlight of the first half of this first year will be the redemption of Lex Luthor – in a forty-page story, set in one room, with just the two of them talking, and Superman using logic to convince Luthor to turn his talents towards good..."
The effect Andrew's text had on me reminded me of how excited I was when I first had r...
Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.
It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.
I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?
I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.
Does anyone have a good reference for the evolutionary psychology of curiosity? A quick google search yielded mostly general EP references. I'm specifically interested in why curiosity is so easily satisfied in certain cases (creation myths, phlogiston, etc.). I have an idea for why this might be the case, but I'd like to review any existing literature before writing it up.
During today's RSS procrastination phase I came across "Knowing the mind of God: Seven theories of everything" on New Scientist.
As it reminded me of problems I have when discussing science-related topics with family et al., I got stuck on the first two paragraphs, the relevant part being:
Small wonder that Stephen Hawking famously said that such a theory would be "the ultimate triumph of human reason – for then we should know the mind of god".
But theologians needn't lose too much sleep just yet.
It reminds me of two questions I have:
Is there some way to "reclaim" comments from the posts transferred over from Overcoming Bias? I could have sworn I saw something about that, but I can't find anything by searching.
Say Omega appears to you in the middle of the street one day, and shows you a black box. Omega says there is a ball inside which is colored with a single color. You trust Omega.
He now asks you to guess the color of the ball. What should your probability distribution over colors be? He also asks for probability distributions over other things, like the weight of the ball, the size, etc. How does a Bayesian answer these questions?
Is this question easier to answer if it was your good friend X instead of Omega?
TL;DR: "weighted republican meritocracy." Tries to discount the votes of people who don't know what the hell they're voting for by making them take a test and weighting the votes by the scores, but also adjusts for the fact that wealth and literacy are correlated.
Occasionally, I come up with retarded ideas. I invented two perpetual motion machines and one perpetual money machine when I was younger. Later, I learned the exact reason they wouldn't work, but at the time I thought I'd be a billionaire. I'm going through it again. The idea seems obviou...
So I'm planning a sequence on luminosity, which I defined in a Mental Crystallography footnote thus:
Introspective luminosity (or just "luminosity") is the subject of a sequence I have planned - this is a preparatory post of sorts. In a nutshell, I use it to mean the discernibility of mental states to their haver - if you're luminously happy, clap your hands.
Since I'm very attached to the word "luminosity" to describe this phenomenon, and I also noticed that people really didn't like the "crystal" metaphor from Mental Crys...
Vote this comment up if you want to revisit the issue after I've actually posted the first luminosity sequence post, to see how it's going then.
"Are you a Bayesian of a Frequentist" - video lecture by Michael Jordan
Posting issue: Just recently, I haven't been able to make comments from work (where, sadly, I have to use IE6!). Whenever I click on "reply" I just get an "error on page" message in the status bar.
At the same time this issue came up, the "recent posts", "recent comments", etc. sidebars aren't getting populated, no matter how long I wait. (Also from work only.) I see the headings for each sidebar, but not the content.
Was there some kind of change to the site recently?
Playing around with taboos, I think I might have come up with a short yet unambiguous definition of friendliness.
"A machine whose historical consequences, if compiled into a countable number of single-subject paragraphs and communicated, one paragraph at a time, to any human randomly selected from those alive at any time prior to the machine's activation, would cause that human's response (on a numerical scale representing approval or disapproval of the described events) to approach complete approval (as a limit) as the number of paragraphs thus commu...
LHC shuts down again; anthropic theorists begin calculating exactly how many decibels of evidence they need...
Since people expressed such interest in piracetam & modafinil, here's another personal experiment with fish oil. The statistics are a bit interesting as well, maybe.
I'll be in London on April 4th and very interested in meeting any Less Wrongers who might be in the area that day. If there's a traditional LW London meetup venue, remind me what it is; if not, someone who knows the city suggest one and I'll be there. On an unrelated note, sorry I've been and will continue to be too busy/akratic to do anything more than reply to a couple of my PMs recently.
Does P(B|A) > P(B) imply P(~B|~A) > P(~B)?
ETA: Assume all probabilities are positive.
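A sketch of one way to see it (assuming, per the ETA, that all the relevant probabilities are strictly positive): the answer is yes.

$$P(B \mid A) > P(B) \iff P(A \cap B) > P(A)\,P(B),$$
$$P(\neg A \cap \neg B) = 1 - P(A) - P(B) + P(A \cap B) > 1 - P(A) - P(B) + P(A)\,P(B) = P(\neg A)\,P(\neg B),$$

and dividing by $P(\neg A) > 0$ gives $P(\neg B \mid \neg A) > P(\neg B)$.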
The Final Now, a new short story by Gregory Benford about (literally) End Times.
Quotation in rot13 for the spoiler-averse's sake. It's an interesting passage and, like FAWS, I also think it's not that revealing, so it's probably safe to read it in advance.
("Bar" vf n cbfg-uhzna fgnaq-va sbe uhznavgl juvpu nqqerffrf n qrzvhetr ragvgl, qrfvtangrq nf "Ur" naq "Fur".)
"Bar synerq jvgu ntvgngrq raretvrf. “Vs lbh unq qrfvtarq gur havirefr gb er-pbyyncfr, gurer pbhyq unir orra vasvavgr fvzhyngrq nsgreyvsr. Gur nfxrj pbzcerffvba pbh...
Re: Cognitive differences
When you try to mentally visualize an image, for example a face, can you keep it constant indefinitely?
(For me, visualization seems to always entail flashing an image, I'd say for less than 0.2 seconds total. If I want to keep visualizing the image I can flash it again and again in rapid succession so that it appears almost seamless, but that takes effort, and after at most a few seconds it will be replaced by a different but usually related image.)
If yes, would you describe yourself as a visual thinker? Are you good at drawing? Good at remembering faces?
(No, so-so, no)
I'm drafting a post to build on (and beyond) some of the themes raised by Seth Godin's quote on jobs and the ensuing discussion.
I'm likely to explore the topic of "compartmentalization". But damn, is that word ugly!
Is there an acceptable substitute?
I am curious as to why brazil84's comment has received so much karma. The way the questions were asked seemed to imply a preconception that there could not possibly be viable alternatives. Maybe it's just because I'm not a native English speaker and read something into it that isn't there, but that doesn't seem to me to be a rationalist mindset. It seemed more like »sarcasm as a stop word« rather than an honest inquiry, let alone an argument.
Suppose you're a hacker, and there's some information you want to access. The information is encrypted using a public key scheme (anyone can access the key that encrypts, only one person can access the key that decrypts), but the encryption is of poor quality. Given the encryption key, you can use your laptop to find the corresponding decryption key in about a month of computation.
Through previous hacking, you've found out how the encryption machine works. It has two keys, A and B already generated, and you have access to the encryption keys. However, neit...
Thoughts about intelligence.
My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.
I've been thinking about the problem of general intelligence. Before going too deep, I wanted to see if I had a handle on what intelligence is, period.
It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?
Moving alone doesn't cou...
Does anyone here know about interfacing to the world (and mathematics) in the context of a severely limiting physical disability? My questions are along the lines of: what applications are good (not buggy) to use and what are the main challenges and considerations a person of normal abilities would misjudge or not be aware of? Thanks in advance!
People constantly ignore my good advice by contributing to the American Heart Association, the American Cancer Society, CARE, and public radio all in the same year--as if they were thinking, "OK, I think I've pretty much wrapped up the problem of heart disease; now let's see what I can do about cancer."
Agreed - I do basically similar things for free, and am reasonably confident that my reaction would be "*shrug* ok" if I were to work with MixedNuts and xe wanted to pay me.
(I do intend to offer help here; I'm still trying to determine what the most useful offer would be.)