Suppose you want to collect some kind of data from a population, but people vary widely in their willingness to provide it (e.g. maybe you want to conduct a 30-minute phone survey, but some people really dislike phone calls, or have much higher hourly wages that this funges against).
One thing you could do is offer to pay everyone $X for data collection. But this will only capture the people whose cost of providing data is below $X, which will distort your sample.
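To see the distortion concretely, here's a toy simulation (all numbers are made up; the lognormal cost distribution and the $20 flat payment are illustrative assumptions, not anything from the original post):

```python
import random

# Toy model: each person has a private cost of providing data.
# A flat payment only recruits people whose cost is below the payment,
# so the recruited sample is biased toward low-cost people.
random.seed(0)
population = [random.lognormvariate(3, 1) for _ in range(10_000)]  # costs in dollars
payment = 20.0
recruited = [c for c in population if c < payment]

mean_all = sum(population) / len(population)
mean_recruited = sum(recruited) / len(recruited)
print(f"mean cost, whole population: {mean_all:.1f}")
print(f"mean cost, recruited sample: {mean_recruited:.1f}")  # noticeably lower
```

If the cost of providing data correlates with whatever you're measuring (wages, busyness, phone aversion), the recruited sample's mean will be systematically off in the same way.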
Here's another proposal: ask everyone for their fair price to provide the dat...
Assorted followup thoughts:
Having young kids is mind bending because it's not uncommon to find yourself simultaneously experiencing contradictory feelings, such as:
It's instrumentally useful for early AGIs to Pause development of superintelligence, for the same reasons it is for humans. Thus preliminary work on policy tools for Pausing unfettered RSI is also something early AGIs could be aimed at, even if only half-baked ideas are available on the eve of a potential takeoff, as the AGIs prove hard to aim and start doing things for their own reasons.
Every now and then in discussions of animal welfare, I see the idea that the "amount" of an animal's subjective experience should be weighted by something like its total number of neurons. Is there a writeup somewhere of the reasoning behind that intuition? Because it doesn't seem intuitive to me at all.
From something like a functionalist perspective, where pleasure and pain exist because they have particular functions in the brain, I would not expect pleasure and pain to become more intense merely because the brain happens to have more neurons. Rather...
Are there known "rational paradoxes", akin to logical paradoxes? A basic example is the following:
In the optimal search problem, the cost of searching at position i is C_i, and the a priori probability of finding the object at i is P_i.
Optimality requires sorting the search locations by non-increasing P_i/C_i: search first where the likelihood of finding divided by the cost of searching is highest.
But since sorting costs O(n log n), C_i must grow faster than O(log i), otherwise the sorting itself is asymptotically wasteful.
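For concreteness, the ordering can be sketched like this (a minimal sketch with made-up numbers; `search_order` is a name introduced here, not from the original problem statement):

```python
def search_order(probs, costs):
    """Return location indices sorted so the highest P_i/C_i is searched first."""
    return sorted(range(len(probs)), key=lambda i: probs[i] / costs[i], reverse=True)

# Hypothetical example: three locations with a priori probabilities P and costs C.
P = [0.2, 0.5, 0.3]
C = [1.0, 5.0, 1.0]
print(search_order(P, C))  # ratios are 0.2, 0.1, 0.3 -> search order [2, 0, 1]
```

Note that location 1 has the highest probability but is searched last, because its cost drags its ratio down.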
Do you know any others?
If military AGI is akin to nuclear bombs, then would it be justified to attack the country trying to militarize AGI? What would the first act of war in future wars be?
If country A is building a nuke, then the argument for country B to pre-emptively attack it is that the first act of war involving nukes would effectively end country B. In this case, the act of war is still a physical explosion.
In case of AI, what would be the first act of war akin to physical explosion? Would country B be able to even detect if AI is being used against it? If ...
The Efficient Markets Hypothesis has plenty of exceptions, but this is too coarse-grained and distant to be one of them. Don't ask "what will happen, so I can bet based on that"; ask "what do I believe that differs widely from my counterparties". This possibility is almost certainly "priced in" to the obvious bets (TSMC).
That said, you may be more correct than the sellers of long-term puts, so maybe it'll work out. Having a theory and then examining the details and modeling the specific probabilities is exactly what you should be doing...
I’m glad that there are radical activist groups opposed to AI development (e.g. StopAI, PauseAI). It seems good to raise the profile of AI risk to at least that of climate change, and it’s plausible that these kinds of activist groups help do that.
But I find that I really don’t enjoy talking to people in these groups, as they seem generally quite ideological, rigid and overconfident. (They are generally more pleasant to talk to than e.g. climate activists in my opinion, though. And obviously there are always exceptions.)
I also find a bunch of activist tactics very irritating aesthetically (e.g. interrupting speakers at events).
I feel some cognitive dissonance between these two points of view.
Maybe there's a filtering effect for public intellectuals.
If you only ever talk about things you really know a lot about, you probably won't become a 'public intellectual', unless that thing is very interesting or you yourself are someone who gets a lot of attention (e.g. a polyamorous cam girl who's very good at statistics, a Muslim Socialist running for mayor in the world's richest city, etc.).
And if you venture out of that and always admit it when you get something wrong, explicitly, or you don't have an area of speciality and admit to get...
This is both a declaration of a wish, and a question, should anyone want to share their own experience with this idea and perhaps tactics for getting through it.
I often find myself with a disconnect between what I know intellectually to be the correct course of action, and what I feel intuitively is the correct course of action. Typically this might arise because I'm just not in the habit of / didn't grow up doing X, but now when I sit down and think about it, it seems overwhelmingly likely to be the right thing to do. Yet, it's often my "gut" and not my m...
This is a plausible rational reason to be skeptical of one's own rational calculations: there is uncertainty, and one should rationally have a conservativeness bias to account for it. What I think is happening, though, is that there's an emotional blocker that is then being cleverly back-solved by finding plausible rational (rather than emotional and irrational) reasons for it, of which this is one. So it's not that this is a totally bogus reason; it's that it provides a plausible excuse for what is actually motivated by something different.
There's a history here of discussion of how to make good air purifiers (like this). Today I learned about ULPA filters and found someone's DIY video using one of them.
A ULPA filter can remove from the air at least 99.999% of dust, pollen, mold, bacteria and any airborne particles with a minimum particle penetration size of 120 nanometres.
I recently moved to a place with worse air quality. The fatiguing effect is noticeable to me (though I suspect I might have vulnerable physiology). It makes me want to try to update far in the other direction: maybe ...
The tree of https://www.lesswrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines?commentId=uaAQb6CsvJeaobXMp spans over two hundred comments from ~fifteen authors by now, so I think it is time to list the major points raised there.
Please take "uld" as an abbreviation for "in the current state of LessWrong, to proceed closer to being actually less wrong AND also build a path to further success, moderation should"; though it would be interesting to know whether you think the optimal tactic would change later.
Feel free to agree/disagree-rea...
superintelligence may not look like we expect. because geniuses don't look like we expect.
for example, if einstein were to type up and hand you most of his internal monologue throughout his life, you might think he's sorta clever, but if you were reading a random sample you'd probably think he was a bumbling fool. the thoughts/realizations that led him to groundbreaking theories were like 1% of 1% of all his thoughts.
for most of his research career he was working on trying to disprove quantum mechanics (wrong). he was trying to organize a political movemen...
Diary of a Wimpy Kid, a children's book by Jeff Kinney published in April 2007 and preceded by an online version in 2004, contains a scene that feels oddly prescient about contemporary AI alignment research. (Skip to the paragraph in italics.)
...Tuesday
Today we got our Independent Study assignment, and guess what it is? We have to build a robot. At first everybody kind of freaked out, because we thought we were going to have to build the robot from scratch. But Mr. Darnell told us we don't have to build an actual robot. We just need to come up with ideas for
-"Nobody actually believed there's only four types of stories... well okay not nobody, obviously once the pithy observation that a Freshman writing class produced works that could easily be categorized into four types of stories was misquoted as saying all stories follow that formula, then someone believed it."
-"You're confusing Borges saying that there are four fundamental stories with John Gardner's exercise for students. Borges said the four fundamental stories are the Siege of Troy - a strong city surrounded and def...
I'm trying to understand if folk who don't see this as stealing don't think that stealing opportunity is a significant thing, or don't get how this is stealing opportunity, or something else that I'm not seeing.
And what arguments have they raised? Whether you agree or feel they hold water is not what I'm asking. I'm wondering what arguments you have heard from the "it is not theft" camp, and whether they are different from the ones I've heard.