I'm not sure which further details you are after.
Thanks for the response! I'm looking for a formal version of the viewpoint you reiterated at the beginning of your most recent comment:
Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. [...] The problem specification needs to include a clause for how 'randomization' is handled.
That makes a lot of sense, but I haven't been able to find it stated formally. Wolpert and Benford's papers (using game theory decisi...
This seems like an opportunity for a startup. It could be a fun project to build startup-weekend-style. The concept doesn't seem particularly tied to the Less Wrong community, and (based on a couple minutes searching for "online study halls") there don't seem to be other prominent startups taking on this specific challenge.
This response challenges my intuition, and I would love to learn more about how the problem formulation is altered to address the apparent inconsistency in the case that players make choices on the basis of a fair coin flip. See my other post.
Thanks for this post; it articulates many of the thoughts I've had on the apparent inconsistency of common decision-theoretic paradoxes such as Newcomb's problem. I'm not an expert in decision theory, but I have a computer science background and significant exposure to these topics, so let me give it a shot.
The strategy I have been considering in my attempt to show a paradox is inconsistent is to derive a contradiction from the problem formulation. In Newcomb's problem, suppose each player uses a fair coin flip to decide whether to one-box or two-box. Then O...
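To make the apparent inconsistency concrete, here is a toy simulation; the setup, numbers, and function names below are my own illustration rather than part of the standard problem statement:

    import random

    # Toy simulation (my own construction): the player one-boxes or two-boxes
    # based on a fair coin flip, and no prediction rule that cannot see the
    # coin does better than chance.

    def coin_flip_player():
        return "one-box" if random.random() < 0.5 else "two-box"

    def omega_predict():
        # Stand-in for any fixed prediction rule; here it always says "one-box".
        return "one-box"

    trials = 100_000
    correct = sum(omega_predict() == coin_flip_player() for _ in range(trials))
    print(correct / trials)  # converges to ~0.5, not 1.0

The measured accuracy hovers around 50%, which is hard to square with the stipulation that Omega predicts perfectly.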
Task management has become a passion of mine; for the last two years or so I've been trying to build something close to what you're describing. I think it's cool that you're giving this a shot. Here are some of my thoughts:
For example, I assumed the median starting salary for computer scientists was a reasonable estimate for what my starting salary would be. It turns out that I can expect to make about twice that much money if I use certain job hunting techniques I learned at the workshop and optimize for money (instead of, say, cool-sounding problems).
What changed your expectation of your starting salary?
The potential benefits from private questioning should be weighed against the cost of the information not being visible to others. I like to see Wei_Dai's questions and the responses they elicit. I think the public exchanges have significant value beyond the immediate participants.
If our simulators are human, that implies that their universe has laws of physics similar to our own. But if we're living in a simulation, I think it's more plausible that our simulators exist in a world operating under different laws of physics (e.g., they live in a universe which is more amenable to our-universe-scale simulation). So I think other factors are in play which could lessen the probability that we are being simulated by humans, let alone by humans from our own future.
Or maybe just greater means. I imagine many humans would run universe-scale simulations, if they had the means.
For the probability estimates, I think it would be valuable to also ask for a ballpark estimate of how much time the survey-taker has put into thinking about each probability. Some people might spend (or have already spent) significantly more time thinking about these probabilities than others; gathering this information could provide a useful dimension for analysis.
It also creates a potential time cost for people looking up what XX and XY chromosomes refer to. If you leave this question in the survey, can you at least include a heuristic for the uninformed, such as "biologically female => XX; biologically male => XY"?
I feel like the first paragraph of my original explanation of my situation addressed this, so maybe I don't understand what you're asking. Can you either rephrase your question or give an example of the kind of response you're looking for?
I started MyPersonalDev a year ago to develop a data-driven personal development web application. The minimum viable product I envision is a task manager for people who like to think about utility functions (give your tasks utility functions!). My long-range vision is to use machine learning and collective intelligence to automate things like next-action determinations, value-of-information calculations, and probability estimates. I've written most of the minimum viable product already and use it extensively to manage my own tasks, but I haven't released an...
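As a toy illustration of what I mean by giving tasks utility functions (the field names below are invented for this example, not the actual MyPersonalDev data model):

    from dataclasses import dataclass

    # Invented fields for illustration only.
    @dataclass
    class Task:
        name: str
        utility_if_done: float      # value of completing the task
        success_probability: float  # chance the allotted time actually finishes it
        hours: float                # estimated time cost

        def expected_utility_per_hour(self) -> float:
            return self.utility_if_done * self.success_probability / self.hours

    tasks = [
        Task("Write release notes", 8.0, 0.9, 2.0),
        Task("Refactor scheduler", 15.0, 0.5, 6.0),
    ]
    # A crude "next-action determination": pick the highest expected utility per hour.
    print(max(tasks, key=Task.expected_utility_per_hour).name)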
Colby (at the Berkeley LW meetup) is wondering what your market is: people interested in utility functions are economics professors and Less Wrong readers. How do you envision reaching more people, and who would you reach out to?
Group consensus: Advertise it somewhere like Less Wrong; if people say it's cool and email you, go with it. If you don't get good feedback, let it go.
What are you developing? Why are you developing it in PHP?
Thanks for this detailed post!
I have assumed a certain level of compromise when considering living situations. For example, I have assumed that people would not be willing to move to a specific city for the primary purpose of joining an awesome living environment, but would instead be willing only to optimize within preexisting geographical constraints.
If there were enough people willing to relocate somewhere for the primary purpose of establishing an awesome living environment, that opens up a new class of opportunities more appealing than the ones I've been...
Taboo "coordinate".
What do you think are the best places to live?
I enjoyed reading your analysis. If there's anything in particular you want input on, I'd be happy to share my perspective.
Thanks for sharing. What's your plan? How much of your time do you think it would be optimal to spend assessing your options with regard to where to live?
I love the idea of living with "agent-y" rationalists, but I definitely don't love the idea of slowly discovering that I'm intractably not motivated or smart enough to truly "hang."
My impression is that the majority of aspiring rationalists are willing to work with each other through our flaws, rather than expecting perfection. I suspect the smartest, most popular people in the rat...
Thanks for posting this. It inspired me to write a more general roommate coordination thread. I'm interested in the living situation you describe, but my housing situation is set until I finish my computer science degree in May. I also don't have a steady source of income right now.
When considering my prospects about where to live post-graduation, I'm torn between Silicon Valley and places that might have a higher quality/cost ratio. Can you share some of your rationale for choosing Silicon Valley over your other options? How would not having a steady source of income change your thinking about where to live?
Are you looking to move in there?
Discuss the concept of this thread here. For example, how could it be more useful? What would you do differently?
I'm not sure it will be very useful without a sticky feature, which we really need for a number of threads. We have all sorts of threads like this that could be stickied and be very helpful, but I'm afraid that at some point, this will drift off the front page and never be seen again.
I attended the Center for Applied Rationality's June rationality camp in Berkeley, and would very much like to have a full-time living environment similar to the environment at camp. I'm very interested in joining or working to create a living environment that values open communication and epistemic hygiene, facilitates house-wide life-hacking experimentation, provides a collaborative, fulfilling environment to live and work in, and those sorts of things.
I'll finish my computer science degree in May, and I plan to make changes to my living situation at tha...
I'm interested in idea 2. If you write about it, I'm especially interested in what you think we should do about it.
There are many different ways we could represent a personality (to varying degrees of accuracy). I have not found a widely accepted format, but I think we can each make our own for now. Whenever you wonder why someone acted a certain way, think about what the relevant parameters might have been and write them down. If several people work on this and share their results, perhaps one or more standardized personality representation formats will emerge.
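As a concrete, entirely made-up example of the kind of ad-hoc record I have in mind (none of these parameter names are a proposed standard):

    # One possible ad-hoc record for a single observed behavior.
    personality_record = {
        "person": "Alice",
        "observed_behavior": "declined a last-minute party invitation",
        "candidate_parameters": {
            "introversion": 0.8,          # 0 = very extraverted, 1 = very introverted
            "risk_aversion": 0.6,
            "values_planning_ahead": 0.9,
        },
        "notes": "Better explained by a planning preference than by introversion?",
    }

    for name, value in personality_record["candidate_parameters"].items():
        print(f"{name}: {value}")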
The parameters collected by online user profiles such as those maintained by Facebook, Google Plus, or OkCupi...
I like "AI Risk Reduction Institute". It's direct, informative, and gives an accurate intuition about the organization's activities. I think "AI Risk Reduction" is the most intuitive phrase I've heard so far with respect to the organization.
I'm writing a forward planner to help me figure out whether to attend university for another year to finish my computer science degree, or do something else such as working for my startup full-time. I have a working prototype of the planner but still need to input most of the possible actions and their effects.
I chose this project because I think my software will do a better job assessing the utility of alternatives than my intuition, and because I implemented a forward planner for an artificial intelligence class I'm taking and wanted to apply something similar to my own life to help me plan my future.
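If it helps make the approach concrete, here is roughly the shape of the planner, heavily simplified; the actions, effects, and utility weights below are invented placeholders rather than my real model:

    from itertools import product

    # Simplified sketch of a forward planner: enumerate action sequences up to a
    # fixed horizon and pick the sequence with the highest resulting utility.
    ACTIONS = {
        "finish_degree":    {"credential": +1, "startup_progress": -1, "savings": -1},
        "startup_fulltime": {"credential":  0, "startup_progress": +2, "savings": -2},
    }

    def apply_action(state, action):
        effects = ACTIONS[action]
        return {k: state.get(k, 0) + effects.get(k, 0) for k in set(state) | set(effects)}

    def utility(state):
        # Placeholder weights standing in for my actual preferences.
        return 3 * state["credential"] + 2 * state["startup_progress"] + state["savings"]

    def best_plan(start, horizon=2):
        def outcome(plan):
            state = start
            for action in plan:
                state = apply_action(state, action)
            return utility(state)
        return max(product(ACTIONS, repeat=horizon), key=outcome)

    print(best_plan({"credential": 0, "startup_progress": 0, "savings": 0}))

The real version mostly differs in having many more actions and effects, and in pulling the utility weights from my stated preferences rather than hard-coding them.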
Thank you. Your comment resolved some of my confusion. While I didn't understand it entirely, I am happy to have accrued a long list of relevant background reading.
I have several questions. I hadn't asked them because I thought I should do more research before taking up your time. Here are some examples:
1) Yes, the solution should be an agent program. It can't be something as simple as "return 1", because when I talk about solving the LPP, there's an implicit desire to have a single agent that solves all problems similar enough to the LPP, for example the version where the agent's actions 1 and 2 are switched, or where the agent's source code has some extra whitespace and comments compared to its own quined representation, etc.
2) We imagine the world to be a program with no arguments that returns a utility value, and the agent to be a subprogram...
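A toy rendering of that structure in code, just to make it concrete (the payoffs, names, and the trivial agent below are invented for illustration, not the formalism from the post):

    # The world is a zero-argument program returning a utility value, and the
    # agent is a subprogram inside it that receives its own quined source.
    AGENT_SOURCE = 'def agent(own_source): return 1'   # stand-in for a quined representation

    def agent(own_source):
        # Placeholder agent; a real solution would reason about own_source
        # and the world's source rather than returning a constant.
        return 1

    def world():
        action = agent(AGENT_SOURCE)          # the agent runs as a subprogram
        return 10 if action == 1 else 5       # the world returns a utility value

    print(world())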
My education in decision theory has been fairly informal so far, and I've had trouble understanding some of your recent technical posts because I've been uncertain about what assumptions you've made. I think stating your assumptions more explicitly would reduce arguments about them, since fewer readers would mistakenly believe you've made different assumptions. It could also reduce inquiries about your assumptions, like the one I made on your post on the limited predictor problem.
One way to do this could be to, in yo...
In section 2, you say:
Unfortunately you can't solve most LPPs this way [...]
By solving most LPPs, do you mean writing a general-purpose agent program that correctly maximizes its utility function under most LPPs? I tried to write a program to see if I could show a counterexample, but got stuck when it came to defining what exactly a solution would consist of.
Does the agent get to know N? Can we place a lower bound on N to allow the agent time to parse the problem and become aware of its actions? Otherwise, wouldn't low N values force failure for any non-trivial agent?
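For what it's worth, here is the toy model behind my worry about low N; the step-budget framing and all the names below are my own invention, not anything from your post:

    # A predictor that runs the agent for at most n basic steps and must fall
    # back to a default guess if the agent hasn't answered by then.

    def run_with_budget(agent_gen, n):
        """Return the agent's action if it yields one within n steps, else None."""
        for step, result in enumerate(agent_gen):
            if result is not None:
                return result       # the agent committed to an action in time
            if step + 1 >= n:
                return None         # budget exhausted; the predictor must guess
        return None

    def nontrivial_agent():
        # Pretend the agent needs 1000 steps just to parse the problem statement.
        for _ in range(1000):
            yield None
        yield 1                     # finally outputs action 1

    print(run_with_budget(nontrivial_agent(), 10))    # None: a low N forces failure
    print(run_with_budget(nontrivial_agent(), 2000))  # 1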
Hi! I'm Patrick Shields, an 18-year-old computer science student who loves AI, rationality and musical theater. I'm happy I finally signed up--thanks for the reminder!
I'd like to cite this article (or related published work) in a research project paper I'm writing, which includes an application of an expected-utility-maximizing algorithm to a version of the prisoner's dilemma. Do you have anything more citable than this article's URL and your LW username? I didn't see anything in your profile pointing to your real name or anything you might have published.