
Setting up my work environment - Doing the causation backwards

-9 Elo 11 August 2016 02:36AM

Original post: http://bearlamp.com.au/doing-the-causation-backwards/


About two years ago I got my first smartphone (yes, later than most other humans).  I was new to apps, and I was new to environments.  When I decided which apps should be on my home screen, I picked the ones I thought I would use most often.

My home screen started with:

  • google bar (the top of the page)
  • calendar
  • facebook
  • notepad app (half the page)
  • ingress (because I play)
  • maps
  • camera
  • torch

My home screen has barely changed since.  I don't play ingress very often these days, but that's by choice.  Facebook, however, was a problem: I was seeing its notifications far too often, and ending up on facebook far more than I wanted.

Recently I decided to try out some tracking systems that include 1/0 metrics.  It looks something like this:

[Screenshot of the 1/0 tracking spreadsheet]

I wanted this somewhere I would see it and fill it out every day, and at the same time I began to question why I had the facebook app on my home screen.  The tracking link is now on my home screen, and I easily fill it out once a day (a win for a successfully implemented habit).
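A 1/0 metric is easy to work with once logged. Here is a minimal sketch in Python of what such a tracker might look like, assuming one binary value per habit per day (all names and dates here are hypothetical, not taken from the post):

```python
# Minimal sketch of a 1/0 ("did it / didn't") habit tracker.

def completion_rate(log):
    """Fraction of logged days on which the habit was done (value 1)."""
    if not log:
        return 0.0
    return sum(log.values()) / len(log)

# Example: a week of entries for one habit, keyed by date string.
log = {
    "2016-08-05": 1,
    "2016-08-06": 0,
    "2016-08-07": 1,
    "2016-08-08": 1,
    "2016-08-09": 1,
    "2016-08-10": 0,
    "2016-08-11": 1,
}

print(completion_rate(log))  # 5 of 7 days
```

A spreadsheet column of 1s and 0s with an AVERAGE cell does the same job; the point is only that binary metrics make the daily check trivially quick to fill out and to review.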

The concept I want to impart today is that the causation goes the wrong way.  Instead of putting the apps I regularly use on my home screen so that I can access them easily, I want to put the apps I want to use regularly on my home screen.  That way I will tend to develop habits of regularly using them instead of the other ones.

Fridge

This applies to the refrigerator too.  Instead of keeping the things you eat all the time at the front, keep the foods you want to eat most at the front, readily accessible (assuming the two sets differ).  If this means healthy foods at the front - do that.  If this means a fruit bowl on the table - do that.

TV

This applies to TV too.  If you find reading books more interesting than watching TV but find yourself watching a lot of TV all the same, put the remotes somewhere harder to reach and leave really good books lying around.

Computer shortcuts

Want to play fewer games?  Get to Reddit less?  Move the games to slightly harder-to-access places - buried in other folders.  Delete the browser auto-fill that completes to Reddit.  Want to do equations by hand rather than with a calculator (for maths practice)?  Make the calculator slightly harder to get to, and keep pen and paper handy around the computer.

Junk food

Do you have a candy cupboard?  Find yourself eating too much from it?  A simple answer would be to empty it and not refill it.  But an alternative that still lets you keep candy in the house is to place slightly healthier but still tasty choices in front of the candy.  For example, dried fruit - still sweet and bite-sized, in a similar class of choice to candy, but significantly healthier.  Some days you will reach past the dried fruit for the chocolate, but many more days you will reach for the dried fruit.


The meta strategy

Without piling on more examples: there are often behaviours you want to do better, actions you want to take instead of other actions, or behaviours that have a "better form" than the one you are currently doing.

The strategy is:

  1. Spend 5 minutes writing out what you usually do on a daily basis.
  2. For each item, consider whether it is the optimal form of the action (or one that yields acceptable results) - don't be afraid to dream up possible optimal actions.
  3. Make the better option more available in your life.
  4. Make it easier for yourself to do the better option.
  5. Check progress in a month (put a reminder in your diary) and iterate on the solution space.
  6. Winning!

We know about System 1 and System 2.  We live some of our life in S1 and some in S2.  S2 knows it won't always be "in charge" and making deliberate choices, but it does have periods of lucid thought in which to set S1 up with better easiest-path behaviours and actions.  This applies to planning, setting up a workspace, avoiding the pain of paying, and much more.

Think: How can I set this up so that I do the better possible path in the future with the least effort?


Meta: this post took 2hrs to write.

Low hanging productivity - improving your workspace

-6 Elo 09 August 2016 12:14PM

Original post:  http://bearlamp.com.au/low-hanging-productivity/

Tl;dr - Simple changes to workspaces like a big screen can make a big difference.


This week I spent a few days away from my usual desk; I have been house sitting.  I didn't think too much of it - I tend to carry a portable lifestyle with me: my laptop, some power blocks for my phone, and various supplies that make for easy "office"-ing around the place.  I usually don't carry a laptop charger; when I know I will be gone a while, I take it with me.

I have always liked a portable office.  The ability to stop and continue later at ease was always important to me.  However, I recently moved into a new place and set up a desk.  I figured I would try X, where X is workspaces (a post for the future).  I had never set up a workspace before, precisely because a workspace is not portable.  The interesting thing that has surprised me this week is that I miss my big screen (which was a gift - I might never have bought myself one).

For whatever reason, being able to view more space at once makes me more productive, especially combined with Linux's natural support for several virtual desktops with simple switching.  My laptop screen is about 19in, which is plenty; the new screen is about 1.5x that.  I never thought it would be useful, and it took me years to get one.  If it broke today, I would be willing to spend up to $900 to replace it (more than six times the price of a new screen).  Right now I wonder how productive I might be with a 3rd screen... or a 4th (or a 3D virtual-reality work environment, with screen space limited by my eyeballs rather than my screen resolution...).

I feel like (along with other habits) I am probably working at 120% of what I was before, a fair chunk of which I owe to the extra screen space.

Questions for today:

  1. What part of your workspace do you remember adding to help you be more productive?
  2. What's the coolest, most awesome, or most productive workspace you have seen in action?  How hard would it be to get that for yourself?
  3. How can you make your current workspace a tiny bit more productive in anticipation of what you have to do tomorrow?

Meta:  This took 45mins to write.

General-Purpose Questions Thread

4 Sable 19 June 2016 07:29AM

Similar to the Crazy Ideas Thread and Diaspora Roundup Thread, I thought I'd try making a General-Purpose Questions Thread.

 

The purpose is to provide a forum for asking the community questions (appealing to the wisdom of this particular crowd) about things that don't really merit their own thread.

True answers from AI: Summary

4 Stuart_Armstrong 10 March 2016 03:56PM

A putative new idea for AI control; index here.

This post summarises the methods for getting honest answers from certain AIs, presented in the previous two posts.

You can basically safely ask an AI for expected utility estimates (including conditional ones) and probability estimates, and get the AI to identify the events of highest and lowest probability and utility and, arguably, highest and lowest conditional utility.

The setup is a boxed AI, which emits a message M, and a stochastic process E that would erase M with low probability, before anyone could see it. There is also a stochastic process that emits a message Y. Let v_E be the indicator function for E and v_y the indicator function for Y=y.

Then, given a utility u bounded between 0 and 1, and a specific y, we can give the AI a utility u# that will cause it to give us the message M_1={q,q^y,p^y,q^{|y}} (which we will see with high probability P(¬E), an event the AI will not optimise for).

Define f(q,u)=-q^2 +2qu. The utility is then

  • u#(1) = v_E(f(q,u) + f(q^y,uv_y) + f(p^y,v_y) + f(q^{|y},u)v_y).

Maximising this u# ensures that, in the AI's best estimate,

  1. q = E(u),
  2. q^y = E(u ∧ (Y=y)),
  3. p^{y} = P(Y=y),
  4. q^{|y} = q^{y}/p^{y}=E(u|Y=y).
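The reason this works is that f is a proper scoring rule: for a fixed distribution over u, E(f(q,u)) = -q^2 + 2q·E(u) = -(q - E(u))^2 + E(u)^2, which is maximised exactly at q = E(u). A quick numeric sketch of this (not from the post; the grid search and sample count are illustrative choices):

```python
# Check that the expected score E[f(q, u)] = E[-q^2 + 2qu]
# is maximised when the report q equals E(u).
import random

random.seed(0)
us = [random.random() for _ in range(10000)]  # samples of a utility u in [0, 1]
mean_u = sum(us) / len(us)

def expected_score(q):
    """Monte Carlo estimate of E[f(q, u)] over the sampled utilities."""
    return sum(-q * q + 2 * q * u for u in us) / len(us)

# Scan candidate reports q on a grid; the best one should sit near E(u).
grid = [i / 100 for i in range(101)]
best_q = max(grid, key=expected_score)

print(best_q, mean_u)  # best_q lands within grid resolution of mean_u
```

An AI maximising u#(1) therefore has no incentive to distort q, q^y, p^y, or q^{|y}: each term is separately maximised by truthful reporting.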

But we need not consider any specific y. The AI could send out the message M_2={y^+,y^-,y^{p+},y^{p-},q^+,q^-,p^+,p^-}, while maximising the utility

  • u#(2) = v_E (f(q^+,uv_{y^+}) + f(q^-,(1-u)v_{y^-}) + f(p^+,v_{y^{p+}}) + f(p^-,1-v_{y^{p-}})).

Then M will be such that, in the AI's best estimate,

  1. (y^+,q^+) = (argmax_y, max_y) E(u ∧ (Y=y)),
  2. (y^-,q^-) = (argmin_y, min_y) E(u ∧ (Y=y)),
  3. (y^{p+},p^+) = (argmax_y, max_y) P(Y=y),
  4. (y^{p-},p^-) = (argmin_y, min_y) P(Y=y).

Finally, define M_{3,r} = {y^{|+},y^{|-},q^{|+},q^{|-}}. And set f_r(q,u)= -(r-1)q^r+ rq^{r-1}u, and define the utility function:

  • u#(3,r) = v_E v_{y^{|+}}f_r(q^{|+},u) + v_E v_{y^{|-}}f_r(q^{|-},1-u).

Then, for large enough r, the AI will output M so that, according to its best estimate,

  1. (y^{|+},q^{|+}) = (argmax_y, max_y) E(u|Y=y),
  2. (y^{|-},q^{|-}) = (argmin_y, min_y) E(u|Y=y).
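For intuition on why f_r also elicits an expectation (a sketch, not in the original post): writing μ = E(u) and taking expectations,

```latex
\mathbb{E}[f_r(q,u)] = -(r-1)\,q^r + r\,q^{r-1}\mu,
\qquad
\frac{d}{dq}\,\mathbb{E}[f_r(q,u)] = r(r-1)\,q^{r-2}\,(\mu - q),
```

so for q in (0,1] the expected score is again maximised at q = μ, and r=2 recovers f(q,u) = -q^2 + 2qu. Raising r sharpens the penalty profile, which is what pushes the AI to attach its report to the y with extremal conditional expectation.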

Unfortunately the value of r cannot be known ahead of time. It might be possible to define a "limit" utility u#(3, ∞).

Note that all these messages and utilities are independent, so the AI can generate messages M_1 + M_2 + M_{3,r} + M_{3,r'} when maximising

  • u#(1) + u#(2) + u#(3,r) + u#(3,r').

But there are issues with very low probabilities, as explained in the previous post.

Instrumental Rationality Questions Thread

14 AspiringRationalist 22 August 2015 08:25PM

This thread is for asking the rationalist community for practical advice.  It's inspired by the stupid questions series, but with an explicit focus on instrumental rationality.

Questions ranging from easy ("this is probably trivial for half the people on this site") to hard ("maybe someone here has a good answer, but probably not") are welcome.  However, please stick to problems that you actually face or anticipate facing soon, not hypotheticals.

As with the stupid questions thread, don't be shy - everyone has holes in their knowledge, though the fewer and smaller we can make them, the better.  Please be respectful of other people admitting ignorance, and don't mock them for it; they're doing a noble thing.

(See also the Boring Advice Repository)

[Open Thread] Stupid Questions (2014-02-17)

3 solipsist 17 February 2014 05:34AM

This is part of a two-week experiment on having more open threads.

Obvious answers aren't always obvious.  If you feel silly for not understanding something, you're not alone.  Ask a question here.

Previous stupid questions

Other similar threads include:

Stupid Questions Thread - January 2014

10 RomeoStevens 13 January 2014 02:31AM

Haven't had one of these for awhile. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!

Yet more "stupid" questions

7 NancyLebovitz 28 August 2013 03:58PM

This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is at close to 500 comments.

More "Stupid" Questions

14 NancyLebovitz 31 July 2013 09:18AM

This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous "stupid" questions thread went to over 800 comments in two and a half weeks, so I think it's time for a new one.

"Stupid" questions thread

40 gothgirl420666 13 July 2013 02:42AM

r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well. 

Stupid Questions Open Thread Round 2

15 OpenThreadGuy 20 April 2012 07:38PM

From Costanza's original thread (entire text):

This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well.  Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent.  If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.

Meta:

 

  • How often should these be made? I think one every three months is the correct frequency.
  • Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.

 

Safe questions to ask an Oracle?

2 Stuart_Armstrong 27 January 2012 06:33PM

The Future of Humanity Institute wants to pick the brains of the less wrongers :-)

Do you have suggestions for safe questions to ask an Oracle? Interpret the question as narrowly or broadly as you want; new or unusual ideas especially welcome.

[Template] Questions regarding possible risks from artificial intelligence

7 XiXiDu 10 January 2012 11:59AM

I am emailing experts in order to raise and estimate the academic awareness and perception of risks from AI. Below are some questions I am going to ask. Please help to refine the questions or suggest new and better questions.

(Thanks goes to paulfchristiano, Steve Rayhawk and Mafred.)

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming?

Q2: Once we build AI that is roughly as good as humans at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Q4: What probability do you assign to the possibility of an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Q5: How important is it to figure out how to make superhuman AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to undergo radical self-modification?

Q6: What probability do you assign to the possibility of human extinction as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)?