Paper first, article next
Yup, the thrust estimates are the same: 1.2 millinewtons per kW, in vacuum.
Hypothesized to be pushing off the quantum foam.
"[The] supporting physics model used to derive a force based on operating conditions in the test article can be categorised as a nonlocal hidden-variable theory, or pilot-wave theory for short."
Pilot-wave theory is a slightly controversial interpretation of quantum mechanics.
It's pretty complicated stuff, but basically the currently accepted Copenhagen interpretation of quantum mechanics states that particles do not have defined locations until they are observed.
Pilot-wave theory, on the other hand, suggests that particles do have precise positions at all times, but in order for this to be the case, the world must also be strange in other ways – which is why many physicists have dismissed the idea.
But in recent years, the pilot-wave theory has been increasing in popularity, and the NASA team suggests that it could help explain how the EM Drive produces thrust without appearing to propel anything in the other direction.
"If a medium is capable of supporting acoustic oscillations, this means that the internal constituents were capable of interacting and exchanging momentum," the team writes.
Pilot wave theory
Recently I have found myself encouraging people to cultivate the desire to X.
Examples that you might want to cultivate interest in include:
- Organise yourself
- Plan for the future
- Be a goal-oriented thinker
- Build the tools
- Anything else in the list of common human goals
- Get healthy sleep
- Be less wrong
- Trust people more
- Trust people less
- Take an interest in a topic (cars, fashion, psychology, etc.)
Why do we need to cultivate?
We don't. But sometimes we can't just "do". There are plenty of reasonable reasons for not being able to just "do" the thing:
- Some things are scary
- Some things need planning
- Some things need research
- Some things are hard
- Some things are a leap of faith
- Some things can be frustrating to accept
- Some things seem stupid (well if exercising is so great why don't I automatically want to do it)
- Other excuses exist.
On some level you have decided you want to do X; on some other level you have not yet committed to doing it. Easy tasks can get done quickly. More complicated tasks are not so easy to do right away.
Well, if it were easy enough to just do the thing successfully, you could go ahead and do it (TTYL, flying to the moon tomorrow - yeah, nope). When you can't, it's often because:
- Your system 1 wants to do the thing, and your system 2 is not sure how.
- Your system 2 wants to do the thing, and your system 1 is not sure it wants to do the thing.
- The healthy part of you wants to diet; the social part of you is worried about the impact on your social life.
(now borrowing from Common human goals)
- Your desire to live forever wants you to take a medication every morning to increase your longevity; your desire for freedom does not want to be tied down to a bottle of pills every morning.
- Your desire for a legacy wants you to stay late at work; your desire for quality family time wants you to leave the office early.
The solution is to cultivate the interest, or the desire, to do the thing. From that initial point of interest or desire you can move forward: do some research to either convince your system 2 of the benefits, or work out how to do the thing to convince your system 1 that it is possible/viable/easy enough. Or maybe, after some research, the thing seems impossible. I offer cultivating the desire as a step along the way to working that out.
Short post for today; Cultivate the desire to do X.
Meta: time to write 1.5 hours.
My table of contents contains my other writing
In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states it will be "impossible to decelerate AI capabilities" but Luke counters with "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." and before I read that dialogue, I had come up with three additional ideas on Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety" but I doubt he's right that it's impossible.
How do you prove something is impossible? You might prove that a specific METHOD of getting to the goal does not work, but that doesn't mean there's not another method. You might prove that all the methods you know about do not work; that doesn't prove there's not some other option you don't see. "I don't see an option, therefore it's impossible" is only an appeal to ignorance. It's a common one, but it's incorrect reasoning regardless. Think about it: can you think of a way to prove that a method that does work isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"? We can say "I don't see it, I don't see it, I don't see it!" all day long.
I say: "Then Look!"
How often do we push past this feeling to keep thinking of ideas that might work? For many, the answer is "never" or "only if it's needed". The sense that something is impossible is subjective and fallible. If we have no way of proving something is impossible, yet believe it to be impossible anyway, that is just a belief. What distinguishes it from bias?
I think it's a common fear that you may waste your entire life on doing something that is, in fact, impossible. This is valid, but it's completely missing the obvious: As soon as you think of a plan to do the impossible, you'll be able to guess whether it will work. The hard part is THINKING of a plan to do the impossible. I'm suggesting that if we put our heads together, we can think of a plan to make an impossible thing into a possible one. Not only that, I think we're capable of doing this on a worthwhile topic. An idea that's not only going to benefit humanity, but is a good enough idea that the amount of time and effort and risk required to accomplish the task is worth it.
Here's how I am going to proceed:
Step 1: Come up with a bunch of impossible project ideas.
Step 2: Figure out which one appeals to the most people.
Step 3: Invent the methodology by which we are going to accomplish said project.
Step 4: Improve the method as needed until we're convinced it's likely to work.
Step 5: Get the project done.
Impossible Project Ideas
- Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster. Luke's Ideas (Starts with "Persuade key AGI researchers of the importance of safety"). My ideas.
- Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles in treating it.
- Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities.
- Rational Agreement Software: If rationalists should ideally always agree, why not make an organised information resource designed to get us all to agree? It would track the arguments for and against ideas so that each piece can be logically verified and challenged; present the entire collection of arguments in an organised form, with nothing repeated and no useless information included; and be editable by anybody, like a wiki, so that the most rational outcome is displayed prominently at the top. This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts. Also, Gwern mentions in a post about critical thinking that argument maps increase critical-thinking skills.
- Discover unrecognized bias: This is especially hard since we'll be using our biased brains to try and detect it. We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
- Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
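As a very rough illustration of the "Syntax/static Analysis Checker for Laws" idea above, here is a minimal sketch of one of the checks it mentions: flagging a term that is defined differently in two places. Everything here (the function names, the regex for definition clauses, and the statute text itself) is invented for illustration; real legal language would need far more sophisticated parsing.

```python
# Sketch: flag terms with conflicting definitions in a statute.
# Assumes definitions take the (invented) form: "Term" means ... .

import re
from collections import defaultdict

def extract_definitions(text):
    """Find clauses of the form '"Term" means ...' and map term -> definitions."""
    definitions = defaultdict(set)
    for term, body in re.findall(r'"([^"]+)" means ([^.]+)\.', text):
        definitions[term.lower()].add(body.strip().lower())
    return definitions

def find_conflicts(text):
    """Return terms that have more than one distinct definition."""
    return {term: defs
            for term, defs in extract_definitions(text).items()
            if len(defs) > 1}

statute = (
    '"Vehicle" means a motorised conveyance. '
    '"Driver" means a person operating a vehicle. '
    '"Vehicle" means any wheeled conveyance.'
)

print(sorted(find_conflicts(statute)))  # ['vehicle']
```

A real version would need to parse cross-references, scoped definitions ("in this section, X means..."), and logical conflicts between rules, which is where the project stops being a regex and starts being impossible-flavoured.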
Add your own ideas below (one idea per comment, so we can vote them up and down), make sure to describe your vision, then I'll list them here.
Figure out which one appeals to the most people.
Assuming each idea is put into a separate comment, we can vote them up or down. If they begin with the word "Idea", I'll be able to find them and put them on the list. If your idea gets enough attention, it will obviously make sense at some point to create a new discussion for it.
Tim Ferriss has been systematically quoted on Less Wrong.
How to make money to donate utilons and show you care is a persistent topic on Less Wrong.
No one here seems to have either tried, or assessed, the feasibility of Tim Ferrissing life (for instance, assessing it by checking on people who tried, without the obvious survivorship bias displayed on Ferriss's own website).
A 30% probability of earning $12,000 per month working 10 hours per week, after a build-up period of 4 months working 10 hours a day to get it started (having fun while figuring out how capitalism works anyway), seems like a fair bet.
My prior for the feasibility of the above paragraph is about 15%.
Should my posterior be above the 30% threshold?
Different prior anyone?
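For the posterior to rise above the 30% threshold, the evidence (reports of people who tried, beyond Ferriss's own cherry-picked examples) would have to carry a sufficient likelihood ratio. A small sketch of the odds-form Bayes update, with the 15% prior from above and purely illustrative likelihood ratios:

```python
# Odds-form Bayes update: how strong must the evidence be to move
# a 15% prior above the 30% threshold? The likelihood ratios used
# here are assumptions for illustration, not measured values.

def posterior(prior, likelihood_ratio):
    """Update a probability by a likelihood ratio using odds form."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.15  # prior odds 3:17

# Posterior of 0.30 means odds 3:7, so the evidence must multiply
# the odds by (3/7) / (3/17) = 17/7, roughly 2.43.
print(round(posterior(prior, 17 / 7), 2))  # 0.3
print(round(posterior(prior, 2.0), 3))     # 0.261 - weaker evidence falls short
```

So the question "should my posterior be above 30%?" reduces to: is the available evidence at least 2.4 times more likely under "Ferrissing works" than under "it doesn't"? Survivorship-biased testimonials probably don't clear that bar on their own.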
Lone bystander bias, everyone?