More specifically, epistemology is a formal field of philosophy. Epistemologists study the interplay of knowledge, truth, and belief: basically, what we know and how we know it. They work to identify the sources and scope of knowledge. An example of an epistemological statement goes something like this: I know that I know how to program because professors who teach programming, authoritative figures, told me so by giving me passing grades in their classes.
Quite right about attachment. It may take quite a few exceptions before it is no longer an exception, particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but not so much as to offset higher levels of reinforcement.
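The dynamic above can be sketched as a toy update rule. This is only an illustration of the idea, not a proposed model; the increment and bonus values (`reinforcement`, `novelty_bonus`) are hypothetical parameters I am inventing for the example:

```python
def update_weight(weight, is_exception, reinforcement=1.0, novelty_bonus=2.0):
    """Toy model: each reinforcement adds a fixed increment to a
    concept's weight; a novel exception gets a larger one-time bonus,
    but not enough to offset sustained reinforcement."""
    if is_exception:
        return weight + novelty_bonus
    return weight + reinforcement

# A concept reinforced ten times by peers or other sources...
concept = 0.0
for _ in range(10):
    concept = update_weight(concept, is_exception=False)

# ...versus three novel exceptions, each with the novelty bonus.
exception = 0.0
for _ in range(3):
    exception = update_weight(exception, is_exception=True)

print(concept, exception)  # 10.0 6.0 — reinforcement still dominates
```

Even with double weight per exception, regular reinforcement keeps the original concept dominant, which matches the intuition that novelty helps but does not win on its own.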
While the Freudian description is accurate relative to sources, I struggle to order them. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.
Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on...
Kaj,
Thank you. I had noticed that as well. It seems the LW group is focused on a much longer time horizon.
In every human endeavor, humans will shape their reality, either physically or mentally. They go to schools where their type of people go and live in neighborhoods where they feel comfortable based on a variety of commonalities. When their circumstances change, either for the better or the worse, they readjust their environment to fit with their new circumstances.
The human condition is inherently vulnerable to wireheading. A brief review of history is rich with examples of people attaining power and money who subsequently change their values to suit thei...
Well, I'm a sailor and raising the waterline is a bad thing. You're underwater when the waterline gets too high.
Thanks for the feedback. I agree on the titling; I started with the title on the desired papers list, so wanted some connection with that. I wasn't sure if there was some distinction I was missing, so proceeded with this approach.
I know it is controversial to say superintelligence will appear quickly. Here again, I wanted some tie to the title. Predicting AI is a very complex problem; to theorize about anything beyond that would distract from the core of the paper.
While even more controversial, my belief is that the first AGI will be a super intelligen...
I struggle with conceiving of wanting to want, or decision making in general, as a tiered model. There are a great many factors that modify the ordering and intensity of utility functions. When human neurons fire, they trigger multiple concurrent paths leading to a set of utility functions. Not all of the utilities are logic-related.
I posit that our ability to process and reason is due to this pattern ability and any model that will approximate human intelligence will need to be more complex than a simple linear layer model. The balance of numerous interactiv...
Granted. My point is the function needs to comprehend these factors to come to a more informed decision. Simply comparing two values is inadequate. Some shading and weighting of the values is required, however subjective that may be. Devising a method to assess the amount of subjectivity would be an interesting discussion. Considering the composition of the value is the enlightening bit.
I also posit that a suite of algorithms should be comprehended with some trigger function in the overall algorithm. One of our skills is to change modes to suit a ...
For a site promoting rationality this entire thread is amazing for a variety of reasons (can you tell I'm new here?). The basic question is irrational. The decision for one situation over another is influenced by a large number of interconnected utilities.
A person, or an AI, does not come to a decision based on a single utility measure. The decision process draws on numerous utilities, many of which we do not yet know. Just a few utilities are morality, urgency, effort, acceptance, impact, area of impact and value.
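The multi-utility decision process described above could be sketched as a weighted sum. The utility names and weights below are purely illustrative, not a proposed taxonomy:

```python
def decision_score(utilities, weights):
    """Combine multiple utility measures into a single decision score.
    Each utility contributes in proportion to its weight; no single
    measure determines the outcome on its own."""
    return sum(weights[name] * value for name, value in utilities.items())

# Hypothetical scores for two options across a few of the utilities
# mentioned (morality, urgency, effort, impact).
option_a = {"morality": 0.9, "urgency": 0.2, "effort": 0.7, "impact": 0.5}
option_b = {"morality": 0.6, "urgency": 0.9, "effort": 0.4, "impact": 0.8}
weights  = {"morality": 2.0, "urgency": 1.0, "effort": 0.5, "impact": 1.5}

score_a = decision_score(option_a, weights)
score_b = decision_score(option_b, weights)
print(score_a, score_b)
```

The point of the sketch is that changing any one weight, such as urgency under time pressure, can flip which option wins, which is why a single utility measure cannot capture the decision.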
Complicating all of this is the overlay o...
I am disappointed that my realistic and fact-based observation generated a down vote.
At the risk of an additional down vote, but in the interest of transparent, honest exchange, I am pointing out a verifiable fact, however unsavory its interpretation may be.
If, over time, the cost of intermediaries (additional handling and overhead) remains below the cost of the steps to eliminate them (the investment required to establish a 501(c)(3)), then I stand corrected. While an improbable situation, it could well be possible.
200k years ago, when Homo sapiens first appeared, fundamental adaptability was the dominant force. The most adaptable, not the most intelligent, survived. While adaptability is a component of intelligence, intelligence is not a component of adaptability. The coincidence with the start of the ice age is consistent with this: the ice age was a relatively minor extinction event, but nonetheless the appearance and survival of Homo sapiens, where less adaptable life forms did not survive, fits the pattern.
Across the Hominidae family, Homo sapiens proved to be most ...
I'm new here, so watch your toes...
As has been mentioned or alluded to, the underlying premise may well be flawed. By considerable extrapolation, I infer that the unstated intent is to find a reliable method for comprehending mathematics, starting with natural numbers, such that an algorithm can be created that consistently arrives at the most rational answer, or set of answers, to any problem.
Everyone reading this has had more than a little training in mathematics. Permit me to digress to ensure everyone recalls a few facts that may not be sufficiently a...
Joshua,
Thank you for the feedback.
I do need to increase the emphasis on the focus, which is the first premise you mentioned. I did not do that in this draft, with the intent of eliciting feedback on the viability of and interest in the model concept.
I will use formal techniques; which one(s) I have not yet settled on. At the moment, I am leaning toward the processes around use case development to decompose current AI models into their componentry. For the weighting and gap calculations, some statistical methods should help.
I am mulling over Bill Hibbard's 2012 AGI p...
The sheer volume of the scenarios diminishes the likelihood of any one of them. The numerous variations indicate that prediction is intractable. While subject to conjunction bias, a more granular approach is the only feasible method to determine even a hint of the pre-FAI environment. Only a progressively refined model can provide information of value.
As I noted on the 80,000 Hours thread, intermediaries are nearly always an added expense on the distribution side. In this case, distribution of donations. The immediate impact is that fewer donation dollars (or whatever currency) find their way to the target organizations. The exception is if an intermediate organization facilitates a 100% pass-through, due to other funding or altruistic efforts.
The intermediary here is mostly notional. CEA is the only entity with legal existence, but on a practical level nearly all employees are effectively GWWC employees, 80k employees, etc., with the CEA employees mainly providing shared services such as operations. So there isn't really much overhead.
I am compelled to point to a fundamental supply chain issue: intermediary drag. Simply stated, the greater the number of steps, the greater the overhead expense. While aggregators have some advantage on the purchasing side, they are an added expense on the distribution side in the vast majority of cases. If they enable some form of extended access, intermediaries may have value, but the limited nature of charitable donations makes intermediaries an unlikely advantage.
Hello,
I am Jay Swartz, no relation to Aaron. I have arrived here via the Singularity Institute and interactions with Louie Helm and Malo Bourgon. Look me up on Quora to read some of my posts and get some insight into my approach to the world. I live near Boulder, Colorado and have recently started a MeetUp, The Singularity Salon, so look me up if you're ever in the area.
I have an extensive background in high tech, roughly split between Software Development/IT and Marketing. In both disciplines I have spent innumerable hours researching human behavior and tho...
I think a semantic check is in order. Intuition can be defined as an immediate cognition of a thought that is not inferred from a previous cognition of the same thought. This definition allows for prior learning to impact intuition. Trained mathematicians will make intuitive inferences based on their training; these can be called breakthroughs when they are correct. It would be highly improbable for an untrained person to have the same intuition, or accurate intuitive thoughts, about advanced math.
Intuition can also be defined as untaught, non-inferential, pu...