The Girardian conclusion and general approach of this text make sense.
But the strategy that does best is a forgiving one, tit-for-two-tats or something like that, which is worth emphasizing (a rough sketch below).
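For reference, a minimal sketch of that strategy in an iterated prisoner's dilemma; the payoff numbers and the sample opponent are my own illustrative assumptions, not anything from the text:

```python
# A minimal sketch of "tit for two tats" in an iterated prisoner's dilemma.
# Payoffs and the sample opponent are invented for illustration.

COOPERATE, DEFECT = "C", "D"

# Standard PD payoffs as (my payoff, their payoff).
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}

def tit_for_two_tats(opponent_history):
    """Forgive a single defection; retaliate only after two in a row."""
    if opponent_history[-2:] == [DEFECT, DEFECT]:
        return DEFECT
    return COOPERATE

def occasional_defector(opponent_history):
    """An invented opponent that defects exactly once, on round 4."""
    return DEFECT if len(opponent_history) == 3 else COOPERATE

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side sees the other's history
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += PAYOFFS[(move_a, move_b)][0]
    return score_a

print(play(tit_for_two_tats, occasional_defector))  # cooperation survives one defection
```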
Also it seems you are putting some moral value in long-term mating that doesn't necessarily reflect our emotional systems or our evolutionary drives. Short-term mating is very common and is seen in most societies where there are enough resources to go around and enough intersexual geographical proximity. Recently there are more and stronger arguments emerging against female short t
This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.
It's not that relevant, so it's just for people who are curious.
Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if fast takeoff soon is a low-probability scenario.
I personally work on inserting concepts and moral concepts into AGI because, for almost anything else I could do, there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.
Not my reading. My reading is that Musk thinks people should not consider the probability of succeeding as a spacecraft startup (0% historically) but instead should reason from first principles, such as thinking about what materials a rocket is made from, then building the costs from the ground up.
I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate multimodal aspects of these modules into a coherent being that thinks it has a self, goals and identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (correctly so, by the way: it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Sh...
We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: they just have to ignore specific synapses, the multiplicity of synaptic connections, etc. If you sum those things up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.
The only reason we understand V1 is that it is a retinotopic inverted map that has been through very...
Wow, that's so cool! My message was censored and altered.
Lesswrong is growing an intelligentsia of its own.
(To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they'll sound as formal as the ones I read!)
Also fascinating that it was near instantaneous.
No, that's if you want to understand why a specific Lesswrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts; I provided them.
But the reference class of Diego's thoughts contains more thoughts that are wrong than ones that are true. So on priors, you might want to ignore them :p
US Patent No. 4,136,359: "Microcomputer for use with video display", for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like"
US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display"
US Patent No. 4,278,972: "Digitally-controlled color signal generation means for use with display"
Basically because I never cared much for cryonics, even with the movie about it that is being made about me. Trailer:
https://www.youtube.com/watch?v=w-7KAOOvhAk
For me cryonics is like soap bubbles and contact improv. I like it, but you don't need to waste your time knowing about it.
But since you asked: I've tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.
And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now; I might have to "retire"...
Yes I am.
Step 1: Learn Bayes
Step 2: Learn reference class
Step 3: Read 0 to 1
Step 4: Read The Cook and the Chef
Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically
Step 6: Find the connection between that and reasoning from first principles, or the gears hypothesis, or whichever other term you have for when you use the inside view and actually think technically about a problem, from scratch, without looking at how anyone else did it (a toy contrast is sketched after this list).
Step 7: Talk to Michael Valentine about it, who has been reaso...
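For concreteness, a toy sketch of the contrast between Steps 1-2 and Step 6, echoing the Musk rocket example above; every number here is invented for illustration:

```python
# Toy contrast of outside-view vs. first-principles reasoning; all numbers invented.

# Outside view (Steps 1-2): the reference class "spacecraft startups"
# gives a prior of roughly zero, since historically none succeeded.
startups_tried, startups_succeeded = 30, 0
outside_view_prior = startups_succeeded / startups_tried

# Inside view (Step 6): price the rocket from raw materials upward,
# instead of anchoring on what rockets have historically cost.
raw_material_costs = {"aluminium": 2.0, "titanium": 1.5, "fuel": 0.5}  # $M, invented
historical_rocket_price = 60.0  # $M, invented
inside_view_margin = historical_rocket_price - sum(raw_material_costs.values())

print(f"Outside-view prior of success: {outside_view_prior:.0%}")
print(f"First-principles cost gap: ${inside_view_margin:.1f}M")
```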
I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is considered to be the mental condition of humans as seen from the human's point of view.
We can make the rainbow, but we can't do physics yet. We can glimpse where minds can go, but we have no idea how to precisely engineer them to get there.
We also know that happiness seems tightly connected to this area called the NAcc of the brain, but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial sli...
Not really. My understanding of AI is far from grandiose; I know less about it than about my fields (Philo, BioAnthro): I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe 4 popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I only have a coarse-grained understanding of it. But in the little research and time I had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robus...
EA is an intensional movement.
http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/
I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago, to explain why a philosopher-anthropologist was auditing his course:
My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.
That's how I see it anyway. Mos...
I'm looking for a sidekick if someone feels that such would be an appropriate role for them. This is me for those who don't know me:
https://docs.google.com/document/d/14pvS8GxVlRALCV0xIlHhwV0g38_CTpuFyX52_RmpBVo/edit
And this is my flowchart/life autobiography of the last few years:
https://drive.google.com/file/d/0BxADVDGSaIVZVmdCSE1tSktneFU/view
Nice to meet you! :)
Polymathwannabe asked: What would be your sidekick's mission?
A: It feels to me like that would depend A LOT on the person, the personality, our physical distance, availability and interaction typ...
My take is that what matters in fun versus work is where the locus of control is situated. That is, where your subjective experience tells you the source of your doing that activity comes from.
If it comes from within, then you count it as fun. If it comes from outside, you count it as work.
This explains your feeling, and it explains the comments in this thread as well. When your past self sets goals for you, you are no longer the center of the locus of control. Then it feels like negatively connoted work.
That's how it is for me anyway.
That is false. Bostrom thought of FAI before Eliezer. Paul thought of the Crypto. Bostrom and Armstrong have done more work on orthogonality. Bostrom and Hanson came up with most of the relevant stuff in multipolar scenarios. Sandberg and EY were involved in the oracle/tool/sovereign distinction.
TDT, which is EY's work, does not show up prominently in Superintelligence. CEV, of course, does, and is EY's work. Lots of ideas in Superintelligence are causally connected to Yudkowsky, but no doubt there is more value from Bostrom there than from Yudkowsky.
Bostrom got 1.5...
Bostrom thought of FAI before Eliezer.
To be completely fair, although Nick Bostrom realized the importance of the problem before Eliezer, Eliezer actually did more work on it, and published his work earlier. The earliest publication I can find from Nick on the topic is this short 2003 paper basically just describing the problem, at which time Eliezer had already published Creating Friendly AI 1.0 (which is cited by Nick).
Would you be willing to also run a survey on Discussion about basing Main on upvotes instead of a mix of self-selection and moderation? As well as on any ideas people suggest here that seem interesting to you?
There could be a Research section, an Upvoted section and a Discussion section, where the Research section is also displayed within the upvoted, trending one.
Arrogance: I caution you not to take this as advice for your own life because, frankly, arrogance goes a long, long, loooooong way. Most rationalists are less arrogant in person than they should be about their subject areas, and rationalist women who identify as female and are straight are even less frequently arrogant than the already low base rate. But some people are over-arrogant, and I am one of them. Over-arrogance isn't about the intensity of arrogance; it is about the non-selectivity. The problem I have always had and been told again and again...
A Big Fish in a Small Pond: for many years I assumed it was better to be a big fish in a small pond than to try to be a big fish in the ocean. This can be decomposed into a series of mistakes, only part of which I have learned to overcome so far.
1) It is based on the premise that social rankings matter more than they actually do. Most of day-to-day life is determined by environment, and being in a better environment, surrounded by better and different people, is more valuable experientially and in terms of output than being a big fish in a small pond.
2) It enco...
I think this depends on how exactly the big fish treat the small fish in the pond/ocean. For example, if you take a job where your colleagues are more skilled than you, which of the following scenarios is more likely?
a) You will have a lot of opportunity to learn from your colleagues: you will be able to watch them work and see how they solve problems; if you make a mistake, they will explain to you what was wrong and what you could have done instead. You will learn a lot, and a few years later you will be one of those experts.
b) You will be at the bo...
I've much enjoyed your posts so far, Kaj; thanks for creating them.
I'd like to draw attention, in this particular one, to
Viewed in this light, concepts are cognitive tools that are used for getting rewards.
to add a further caveat: though some concepts are related to rewards, and some conceptual clustering is done in a way that maps to the reward of the agent as a whole, much of what goes on in concept formation, simple or complex, is just the old "fire together, wire together" saying (a minimal sketch below). More specifically, if we are only calling "reward" what is a r...
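For concreteness, a minimal sketch of the distinction I mean, with invented activity values; the reward-modulated rule is my own illustrative contrast, not a claim about the post:

```python
import numpy as np

# Pure Hebbian update: the weight between two units grows when they are
# active together. Note that no reward term appears anywhere in the rule.
def hebbian_step(w, pre, post, lr=0.01):
    return w + lr * np.outer(post, pre)

# Reward-modulated variant: the same correlation term, gated by a scalar
# reward. Only under a rule like this are concepts "tools for getting rewards".
def reward_modulated_step(w, pre, post, reward, lr=0.01):
    return w + lr * reward * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])  # presynaptic activity (invented)
post = np.array([0.5, 1.0])      # postsynaptic activity (invented)
w = np.zeros((2, 3))
w = hebbian_step(w, pre, post)   # wiring strengthens with no reward at all
print(w)
```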
If you are particularly interested in sexual status, I wrote about it before here, dispelling some of the myths.
Usually dominance is related to a power that is maintained by aggression, stress or fear.
The usual search route will lead you to some papers: https://scholar.google.com/scholar?q=prestige+dominance&btnG=&hl=en&as_sdt=0%2C5&as_ylo=2009
What I would do is find some 2014 and 2015 papers and check their bibliographies, or ask the principal investigator which papers on the topic are most interesting.
I have a standing interest in other primates and cetaceans as well, so I'd look for attempts to show that others have or don't have prestige.
Should the violin players on the Titanic have stopped playing the violin and tried to save more lives?
What if they could have saved thousands of Titanics each? What if there already were a technology that could play a deep, sad violin song in the background and project holograms of violin players playing in deep sorrow as the ship sank?
At some point, it becomes obvious that doing the consequentialist thing is the right thing to do. The question is whether the reader believes 2015 humanity has already reached that point or not.
We already produce beauty, ...
Why not actual Fields Medalists?
Tim Ferriss lays out a guide for how to learn anything really quickly, which involves contacting whoever was great at it ten years ago and asking them who is great that shouldn't be.
Doing that for Fields Medalists and other high achievers is plausibly extremely high value.
Hard-coded AI is less likely than ems, since ems that are copies or modified copies of other ems would instantly be aware that the race is happening, whereas most of the later stages of hard-coded AI could be concealed from strategic opponents for part of the period in which they would have made hasty decisions, if only they had known.
There is a gender difference in resource-constraint satisfaction worth mentioning: males in most primate species, including humans, are less resource-constrained than females. The main reason why females require fewer resources to be emotionally satisfied is that there is an upper bound on how many resources are required to attract the males with the best genes, acquire their genes and parenting resources, have nearly as many children as possible, and take good care of those children and their children. For males, however, because there is ...
None of Miles's arguments resonates with me, basically because one counterargument could erase the pragmatic relevance of his points in one fell swoop:
The vast majority of expected value lies in changing policies where the incentives are not aligned with ours. Cases where the world would be destroyed no matter what happened, or cases where something is providing a helping hand - such as the incentives he suggests - don't change where our focus should be. Bostrom knows that, and focuses throughout on cases where more consequences derive from our actions. It's...
Copied from the Heterodox Effective Altruism facebook group (https://www.facebook.com/groups/1449282541750667/):
Diego Caleiro: I've read the comments and now speak as me, not as Admin:
It seems to me that the Zurich people were right to exclude Roland from their events. Let me lay out the reasons I have, based on extremely partial information:
1) If Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden both to be clear and to justify why those topics deserve to be there.
2) The politeness of EAs is in great part...
(Moderator note: I banned Diego partially for a long history of inflammatory commenting, but mostly for various highly deceptive and manipulative actions he took in the Bay Area community. This comment doesn't have much to do with that, but it reminded me that he was still around.)