All of Daphne_W's Comments + Replies

I asked you to try out avoiding quotation marks. You said I hate you.

I'm guessing that people who "made it" have a bunch of capital that they can use to purchase AI labor under the scenario you outline (i.e., someone gets superintelligence to do what they want). 

If the superintelligence is willing to deprive people of goods and services because they lack capital, then why would it be empathetic towards those that have capital? The superintelligence would be a monopsony and monopoly, and could charge any amount for someone existing for an arbitrarily short amount of time. Assuming it even respects property law when it i... (read more)

The Demon King does not solely attack the Frozen Fortress to profit on prediction markets. The story tells us that the demons engage in regular large-scale attacks, large enough to serve as demon population control. There is no indication that these attacks decreased in size when they were accompanied by market manipulation (and if they did, that would be a win in and of itself).

So the prediction market's counterfactual is not that the Demon King's forces don't attack, but that they attack at an indeterminate time with the same approximate frequency and ... (read more)

That's probably the only "military secret" that really matters.

The soldiers guarding the outer wall and the Citadel treasurer that pays their overtime wages would beg to differ. 

2[anonymous]
If you think I missed the point, can you explain in more detail? Here is my model: the Demon King buys shares in "The Demon King will attack the Frozen Fortress", then sends some small technically-an-attack to the fortress so the market resolves yes, and knowing this will be done is not worth the money lost to the Demon King on the market. No serious battle plans or military secrets are leaked, and more generally the Demon King would only do this if the information revealed weren't worth the market cost. (i.e., it's a central kind of prediction market outcome manipulation, i.e., exploiting how this prediction market assumed a kind of metaphysical gap between predictors and the world / knowledge and action.)

Do you disagree with this, or think it's true but misses the point, in which case what was the point?
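A toy numeric sketch of the trade being debated here may help; the prices and share counts below are purely hypothetical and only illustrate the payoff structure (who pays whom) when the King buys YES and then launches a token attack:

```python
# Toy sketch of the manipulation described above (all numbers are hypothetical,
# chosen only to illustrate the payoff structure, not taken from the story).

price_before = 0.30        # market-implied probability of an attack before the King trades
price_after = 0.60         # price after the King's own buying pushes it up
shares = 1_000             # YES shares the King accumulates
avg_fill = (price_before + price_after) / 2  # crude average purchase price

king_cost = shares * avg_fill
king_payout = shares * 1.0  # YES shares pay out 1 once the token attack resolves the market
king_profit = king_payout - king_cost

print(f"Demon King's trading profit: {king_profit:.0f}")
# The other traders collectively lose this amount, but the price spike itself
# tells the defenders an attack is coming, which is the information Daphne_W
# argues is worth paying for, since some attack was going to happen regardless.
```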

I think that AI people that are very concerned about AI risk tend to view loss of control risk as very high, while eternal authoritarianism risks are much lower.

I'm not sure how many people see the risk of eternal authoritarianism as much lower and how many people see it as being suppressed by the higher probability of loss of control[1]. Or in Bayesian terms:

P(eternal authoritarianism) = P(eternal authoritarianism | control is maintained) × P(control is maintained)

Both sides may agree that P(eternal authoritarianism | control is maintained) is h... (read more)
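To spell out the decomposition being gestured at (a minimal sketch, writing EA for eternal authoritarianism and assuming EA is only possible if control is maintained, so the other branch of the total-probability expansion drops out):

```latex
P(\text{EA}) = P(\text{EA} \mid \text{control})\,P(\text{control})
             + \underbrace{P(\text{EA} \mid \text{loss of control})}_{\approx\,0}\,P(\text{loss of control})
        \approx P(\text{EA} \mid \text{control})\,P(\text{control})
```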

2Noosphere89
Yeah, from a more technical perspective, I forgot to add that condition where loss of control is maintained or removed in the short/long run as an important variable to track.

As far as I understand, "a photon in a medium" is a quasiparticle. Actual photons always travel at the speed of light, and the "photon" that travels through glass at a lower speed is the sum of an incredibly complicated process that cancels out perfectly into something that can be described as one or several particles if you squint a little, because the energy of the electromagnetic field excitation can't be absorbed by the transparent material and because of conservation of momentum.

The model of the photon "passing by atoms and plucking them" is a lie to c... (read more)

3Ben
Yes, you are certainly right it is a quasiparticle. People often use the word polariton to name it (eg https://www.sciencedirect.com/science/article/pii/S2666032620300363#bib1 ).

I think you might have muddled the numbering? It looks like you have written an argument in favor of either [2] or [3] (which both hold that the momentum of the full polariton is larger than the momentum of the photonic part alone - in the cartoon of the original post, whether or not the momentum "in the water" is included), then committed to [1] instead at the end. This may be my fault, as the order I numbered the arguments in the summary at the end of the post didn't match the order they were introduced, and [2] was the first introduced. (In hindsight this was probably a bad way to structure the post, sorry about that!)

" "passing by atoms and plucking them" is a lie to children " - I personally dislike this kind of language. There is nothing wrong with having mental images that help you understand what is going on. If/when those images need to be discarded then I don't think belittling them or the people who use them is helpful. In this case the "plucking" image shows that at any one time some of the excitation is in the material, which is the same thing you conclude. [In this case I think the image is acceptably rigorous anyway, but let's not litigate that because which mental images are and are not compatible with a quantum process is a never ending rabbit hole.]

Thank you very much for reading and for your thoughts. If I am correct about the numbering muddle it is good to see more fellow [2/3]'ers.

Though compare and contrast Dune's test of the gom jabbar:

You've heard of animals chewing off a leg to escape a trap? There's an animal kind of trick. A human would remain in the trap, endure the pain, feigning death that he might kill the trapper and remove a threat to his kind.

Even if you are being eaten, it may be right to endure it so that you have an opportunity to do more damage later.

I mean, we're getting this metaphor off its rails pretty fast, but to derail it a bit more:

The kind of people who lay human-catching bear traps aren't going to be fooled by "Oh he's not moving it's probably fine".

Everybody likes to imagine they'd be the one to survive the raiding/pillaging/mugging, but the nature of these predatory interactions is that the people doing the victimizing have a lot more experience and resources than the people being victimized. (Same reason lots of criminals get caught by the police.)

If you're being "eaten", don't try to get clever. Fight back, get loud, get nasty, and never follow the attacker to a second location.

You're suggesting angry comments as an alternative to mass retributive downvoting. That easily implies mass retributive angry comments.

As for policing against systemic bias in policing, that's a difficult problem that society struggles with in many different areas because people can be good at excusing their biases. What if one of the generals genuinely makes a comment people disagree with? How can you determine to what extent people's choice to downvote was due to an unauthorized motivation?

It seems hard to police without acting draconically.

Just check their profile for posts that do deserve it that you were previously unaware of. You can even throw a few upvotes at their well-written comments. It's not brigading, it's just a little systemic bias in your duties as a user with upvote-downvote authority.

Are you trying to prime people to harass the generals?

 

Besides, it's not mass downvoting, it's just that the increased attention to their accounts revealed a bunch of poorly written comments that people genuinely disagree with and happen to independently decide are worthy of a downvote :)

3habryka
Genuinely not sure what you are referring to. I think it's reasonable to be a bit annoyed at generals who get you nuked, but I mean, if someone starts going overboard we will also moderate that.

Well, in that case, it's not moderation for downvoting, it's just increased attention from the moderators re-evaluating the degree to which someone is genuinely contributing positively to the site, who happen to independently decide someone is worthy of some moderation warnings :P

"why not just" is a standard phrase for saying what you're proposing would be simple or come naturally if you try. Combined with the rest of the comment talking about straightforwardness and how little word count, and it does give off a somewhat combatitive vibe.

I agree with your suggestion, and it is good to hear that you don't intend to imply that it is simple, so maybe it would be worth editing the original comment to prevent miscommunication for people who haven't read it yet. For the time being I've strong-agreed with your comment to save it from a negativity snowball effect.

No. I would estimate that there are fewer rich people willing to sacrifice their health for more income than there are poor people willing to do the same. Rich people typically take more holidays, report higher job satisfaction, suffer fewer stress-related ailments, and spend more time and money on luxuries rather than reinvesting into their careers (including paying basic cost of living to be employable).

And not for lack of options. CEOs can get involved with their companies and provide useful labor by putting their nose to the grindstone, or kowtow to in... (read more)

There are rich people pushing themselves to work 60+ hour weeks, struggling to keep a smile on their face while people insult and demean them. And there are poor people who live as happy ascetics, enjoying the company of their fellows and eating simple meals, choosing to work few hours even if it means forgoing many things the middle class would call necessities.

There are more rich people that choose to give up the grind than poor people. It's tougher to accept a specific form of suffering if you see that 90% of your peers are able to solve the suffering with w... (read more)

1Sweetgum
Did you mean to say "There are more poor people that choose to give up the grind than rich people?"

You seem to approach the possible existence of a copy as a premise, with the question being whether that copy is you. However, what if we reverse that? Given we define 'a copy of you' as another one of you, how certain is it that a copy of you could be made given our physics? What feats of technology are necessary to make that copy?

Also, what would we need to do to verify that a claimed copy is an actual copy? If I run ChatGPT-8 and ask it to behave like you would behave based on your brain scan and it manages to get 100% fidelity in all tests you can think of,... (read more)

Daphne_W50

In the post you mention Epistemic Daddies, mostly describing them as sources that are deferred to for object-level information.

I'd say there is also a group of people who seek Epistemic Mommies. People looking for emotional assurance that they're on the right path and their contribution to the field is meaningful; for assurance that making mistakes in reasoning is okay; for someone to do the annoying chores of epistemic hygiene so they can make the big conclusions; for a push to celebrate their successes and show them off to others; etc.

Ultimately both are... (read more)

3Elizabeth
I would say Epistemic Daddies are deferred to, for action and strategy, although sometimes with a gloss of giving object level information. But I think you're right that there's a distinction between "giving you strategy" and "telling you your current strategy is so good it's going right on the fridge", and Daddy/Mommy is a decent split for that. 
Daphne_W10

I kind of... hard disagree?

Effective Samaritans can't be a perfect utility inverse of Effective Altruists while keeping the labels of 'human', 'rational', or 'sane'. Socialism isn't the logical inverse of Libertarianism; both are different viewpoints on how to achieve the common goal of societal prosperity.

Effective Samaritans won't sabotage an EA social experiment any more than Effective Altruists will sabotage an Effective Samaritan social experiment. If I received a letter from Givewell thanking me for my donation that was spent on sabotaging a socialis... (read more)

On third order, people who openly worry about X-Risk may get influenced by their environment, becoming less worried as a result of staying with a company whose culture denies X-Risk, which could eventually even cause them to contribute negatively to AI Safety. Preventing them from getting hired prevents this.

That sounds like a cross between learned helplessness and madman theory.

The madman theory angle is "If I don't respond well to threats of negative outcomes, people (including myself) have no reason to threaten me". The learned helplessness angle is "I've never been able to get good sets of tasks and threats, and trying to figure something out usually leads to more punishment, so why put in any effort?"

Combine the two and you get "Tasks with risks of negative outcomes? Ugh, no."

 

With learned helplessness, the standard mechanism for (re)learni... (read more)

I'd say "fuck all the people who are harming nature" is black-red/rakdos's view of white-green/selesnya. The "fuck X" attitude implies a certain passion that pure black would call wasted motion. Black is about power. It's not adversarial per se, just mercenary/agentic. Meanwhile the judginess towards others is an admixture of white. Green is about appreciating what is, not endorsing or holding on to it.

Black's view of green is "careless idiots, easy to take advantage of if you catch them by surprise". When black meets green, black notices how the commune's... (read more)

Fighting with tigers is red-green, or Gruul in MTG terminology. The passionate, anarchic struggle of nature red in tooth and claw. Using natural systems to stay alive even as it destroys is black-green, or Golgari. Rot, swarms, reckless consumption that overwhelms.

Pure green is a group of prehistoric humans sitting around a campfire sharing ghost stories and gazing at the stars. It's a cave filled with handprints of hundreds of generations that came before. It's cats lounging in a sunbeam or birds preening their feathers. It's rabbits huddling up in their d... (read more)

That doesn't seem like a good idea. You're ignoring long-term harms and benefits of the activity - otherwise cycling would be net positive - and you're ignoring activity duration. People don't commute to work by climbing Mount Everest or going skydiving.

2Gunnar_Zarncke
I'm not ignoring them. I'm just comparing danger base rates. That's why "generally". The benefits of each activity depend on the user.
Daphne_W2416

I don't think it's precisely true. The serene antagonism that comes from having examined something and recognizing that it is worth taking your effort to destroy is different from the hot rage of offense. But of the two, I expect antagonism to be more effective in the long term.

  • Rage is accompanied by a surge of adrenaline, sympathetic nervous activation, and usually parasympathetic nervous suppression, which is not sustainable in the long term. Antagonism is compatible with physiological rest and changes in the environment.
  • Consequently, antagonism has acce
... (read more)

As far as I can tell, the AI has no specialized architecture for deciding about its future strategies or giving semantic meaning to its words. Its outputting the string "I will keep Gal a DMZ" does not have the semantic meaning of it committing to keep troops out of Gal. It's just the phrase that players who are most likely to win use in that boardstate with its internal strategy.

Like chess grandmasters being outperformed by a simple search tree when chess was supposed to be the peak of human intelligence, I think this will have the same effect of disenchanting th... (read more)

3Lone Pine
I'd like to push back on "AI has beaten StarCraft". AlphaStar didn't see the game interface we see, it just saw an interface with exact positions of all its stuff and ability to make any commands possible. It's far from the mouse-and-keyboard that humans are limited to, and in SC that's a big limitation. When the AI can read the game state from the pixels and send mouse and keyboard inputs, then I'll be impressed.
8LawrenceC
This is incorrect; they use "honest" intentions to learn a model of message > intention, then use this model to annotate all the other messages with intentions, which they then use to train the intent > message map. So the model has a strong bias toward being honest in its intention > message map. (The authors even say that an issue with the model is it has the tendency to spill too many of its plans to its enemies!) The reason an honest intention > message map doesn't lead to a fully honest agent is that the search procedure that goes from message + history > intention can "change its mind" about what the best intention is.

This is correct; every time AI systems reach a milestone earlier than expected, this is simultaneously an update upward on AI progress being faster than expected, and an update downward on the difficulty of the milestone.

Dear M.Y. Zuo,

 

I hope you are well.

It is my experience that the conventions of e-mail are significantly more formal and precise in expectation when it comes to phrasing. Discord and Slack, on the other hand, have an air of informal chatting, which makes it feel more acceptable to use shortcuts and to phrase things less carefully. While feelings may differ between people and conventions between groups, I am quite confident that these conventions are common due to both media's origins, as a replacement for letters and memos and as a replacement for in-p... (read more)

1M. Y. Zuo
In my experience, after the first few introductory emails, opening remarks, formalities, etc., are dropped, as the introductions have already been made. Unless the other party is vastly more senior or higher rank, then perhaps the same style is retained, especially in more hierarchical organizations. For a place like Lightcone, if someone was still writing their 20th email to the same person like the above, I would seriously question their sanity.

It's possible, even after all the paraphernalia is removed, that forming complete sentences increases the word count significantly, if the normal practice otherwise is to use slang and/or abbreviations everywhere. Yet for that to 2x, or more, the total length seems really astonishing. What kind of Slack conversations are typical? Can you provide a real world example?

----------------------------------------

To look at it another way, I don't see how I could cut the above comment in half while retaining all the same meanings; there just aren't that many commonly known abbreviations or slang words.

That's a bit of a straw man, though to be fair it appears my question didn't fit into your world model as it does in mine.

For me, the insurrection was in the top 5 most informative/surprising US political events of 2017-2021. On account of its failure it didn't have consequences as major as other events, but it caused me to update my world model more. It was a sudden confrontation with the size and influence of anti-democratic movements within the Republican party, which I consider Trump to be sufficiently associated with to cringe from the notion of vot... (read more)

2WalterL
I'd agree that Jan 6th was top 5 most surprising US political events 2017-2021, though I'm not sure that category is big enough that top 5 is an achievement. (That is, how many events total are in there for you?)

I wasn't substantially surprised by it in the way that you were, however. I'm not saying that I predicted it, mind you, but rather that it was in a category of stuff that felt at least Trump-adjacent from the jump. As a descriptive example, imagine a sleazy used car salesman lies to me about whether the doors will fall off the car while I drive it home. I plainly didn't expect that particular lie, since I fell for it, but the basic trend of 'this man will lie for his own profit' is baked into the persona from the get go.

My model of American voters ending American democracy remains extremely low. For better or for worse, that's just not in any real way how we roll. Take a look at every anti-democratic movement presently going, and you will see endless rhetoric about how they are really double secret truly democratic. The clowns who want to pack the supreme court/senate are just trying to compensate for the framers not jock riding cities hard enough. The stooges who want the VP to be able to throw out electors not for his party invent gibberish about how the framers intended this. The people kicking folks off voter rolls chant about how they are preventing imaginary voter fraud. That kind of movement, unwilling to speak its own name, has a ceiling on how hard it can go. I believe that ceiling is lower than the bar they'd need to clear to seize power, and I think the last few years have borne this sentiment out.

I'm not sure I exactly get your point re: how to measure Trump's time vs. hypothetical Clinton's time. I will just repeat my sentiment that we can't know how they would have compared to one another, because Clinton's time will remain hypothetical. It might have had more or less terrorism. I will reiterate that the odds of terrorism be

With Trump/Republicans I meant the full range of questions, from just Trump, through participants in the storming of Congress, to all Republican voters.

It seems quite easy for a large fraction of a population to be a threat to the population's interests if they share a particular dangerous behavior. I'm confused why you would think that would be difficult. Threat isn't complete or total. If you don't get a vaccine or wear a mask, you're a threat to immunocompromised people, but you can still do good work professionally. If you vote for someone attempti... (read more)

3WalterL
You should probably reexamine the chain of logic that leads you to the idea that the most important consequence of the electorate's decision in 2016 was the events of Jan 6th, 2021. It isn't remotely true.

To entertain the hypothetical, where what we care about when doing elections is how many terrorist assaults they produce, would be to compare the actual record of Trump to an imaginary record of President Clinton's 4 years in office. How would you recommend I generate the latter? Does the QAnon Shaman of the alternate timeline launch 0, 1, or 10 assaults on the capital if his totem is defeated 4 years earlier?

A more serious reappraisal of the Trump/Clinton fork would focus on COVID, supreme court picks, laws that a democratic president would have vetoed vs. those Trump signed (are we giving Clinton a democratic congress, or is this alt history only a change in presidency?), international decisions where Trump's isolationist instincts would have been replaced by Clinton's interventionist ones, etc. It is a serious and complicated question, but the events of Jan 6th play a minimal role in it.

Hey, I stumbled on this comment and I'm wondering if you've updated on whether you consider Trump/Republicans a threat to America's interests in light of the January 6th insurrection.

2WalterL
I'm not sure precisely what you mean, like, how would it work for like 1/3 of Americans to be a threat to America's interests? I think, roughly speaking, the answer you are looking for is 'no', but it is possible I'm misunderstanding your question.

People currently give MIRI money in the hopes they will use it for alignment. Those people can't explain concretely what MIRI will do to help alignment. By your standard, should anyone give MIRI money?

When you're part of a cooperative effort, you're going to be handing off tools to people (either now or in the future) which they'll use in ways you don't understand and can't express. Making people feel foolish for being a long inferential distance away from the solution discourages them from laying groundwork that may well be necessary for progress, or even from exploring.

As a concrete example of rational one-hosing, here in the Netherlands it rarely gets hot enough that ACs are necessary, but when it does a bunch of elderly people die of heat stroke. Thus, ACs are expected to run only several days per year (so efficiency concerns are negligible), but having one can save your life.

I checked the biggest Dutch-only consumer-facing online retailer for various goods (bol.com). Unfortunately I looked before making a prediction for how many one-hose vs two-hose models they sell, but even conditional on me choosing to make a point... (read more)

5denkenberger
I must admit I was surprised by the statistics here. It is true that if you only use the air conditioner a few days a year, the energy efficiency is not important. However, the cooling capacity is important. I think many people are using efficiency to mean cooling capacity above.

Anyway, let's say the incremental cost of going from one hose to two hoses is $30. From working on Department of Energy energy efficiency rules, typically the marginal markup of an efficient product is less than the markup on the product overall (meaning that the incremental cost of just adding a hose is less than the $20 of buying it separately). It is true that with a smaller area for the air to come into the device with a hose, the velocity has to be higher, so the fan blades need to be made bigger (it typically is one motor powering two different fan blades on two sides, at least for window units). But then you could save money on the housing because the port is smaller. The incremental cost of motors is low. Then if the air conditioner cost $200 to start with, that would be 15% incremental cost.

Then let's say the cooling capacity increased by 25% (I would say it actually does matter that a T-shirt was used, which would allow room air in instead of just outdoor air, so it probably would be higher than this). What this means is that the two hose actually has greater cooling capacity per dollar, so you should choose a small two hose even if you don't care about energy use at all. Strictly this is only true with no economies of scale, which is not a great assumption. But I think overall it will hold. Another case this would break down is if a person were plugging and unplugging many times, but I don't think that's the typical person.

So I suspect what is going on is that people don't realize that the cooling capacity of the one hose is actually reduced more than the cost, so they should just be getting a smaller capacity two hose unit (at lower initial cost and energy cost). There is a broade
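To spell out the arithmetic in that estimate (using the comment's illustrative numbers, which are stated assumptions rather than measurements): a $30 incremental cost on a $200 unit is a roughly 15% price increase, against a roughly 25% capacity increase, so the two-hose design comes out ahead on cooling capacity per dollar:

```latex
\frac{\text{capacity per dollar (two-hose)}}{\text{capacity per dollar (one-hose)}}
  \approx \frac{1.25}{1.15} \approx 1.09 > 1
```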

It feels more to me like we're the quiet weird kid in high school who doesn't speak up or show emotion because we're afraid of getting judged or bullied. Which, fair enough, the school is sort of like that - just look at poor cryonics, or even nuclear power - but the road to popularity (let alone getting help with what's bugging us) isn't to try to minimize our expressions to 'proper' behavior while letting ourselves be characterized by embarrassing past incidents (e.g. Roko's Basilisk) if we're noticed at all.

It isn't easy to build social status, but right now we're trying next to nothing, and we've seen that it doesn't seem to be enough.

Agree that it's too shallow to take seriously, but

If it answered "you would say during text input batch 10-203 in January 2022, but subjectively it was about three million human years ago" that would be something else.

only seems to capture AI that managed to gradient hack the training mechanism to pass along its training metadata and subjective experience/continuity. If a language model were sentient in each separate forward pass, I would imagine it would vaguely remember/recognize things from its training dataset without necessarily being able to place them, like a human when asked when they learned how to write the letter 'g'.

Interventions on the order of burning all GPUs in clusters larger than 4 and preventing any new clusters from being made, including the reaction of existing political entities to that event and the many interest groups who would try to shut you down and build new GPU factories or clusters hidden from the means you'd used to burn them, would in fact really actually save the world for an extended period of time and imply a drastically different gameboard offering new hopes and options.

I suppose 'on the order of' is the operative phrase here, but that specifi... (read more)

AI can run on CPUs (with a certain inefficiency factor), so only burning all GPUs doesn't seem like it would be sufficient. As for disruptive acts that are less deadly, it would be nice to have some examples but Eliezer says they're too far out of the Overton Window to mention.

If what you're saying about Eliezer's claim is accurate, it does seem disingenuous to frame "The only worlds where humanity survives are ones where people like me do something extreme and unethical" as "I won't do anything extreme and unethical [because humanity is doomed anyway]". I... (read more)

I'm confused about A6, from which I get "Yudkowsky is aiming for a pivotal act to prevent the formation of unaligned AGI that's outside the Overton Window and on the order of burning all GPUs". This seems counter to the notion in Q4 of Death with Dignity where Yudkowsky says

It's relatively safe to be around an Eliezer Yudkowsky while the world is ending, because he's not going to do anything extreme and unethical unless it would really actually save the world in real life, and there are no extreme unethical actions that would really actually save the world

... (read more)

Interventions on the order of burning all GPUs in clusters larger than 4 and preventing any new clusters from being made, including the reaction of existing political entities to that event and the many interest groups who would try to shut you down and build new GPU factories or clusters hidden from the means you'd used to burn them, would in fact really actually save the world for an extended period of time and imply a drastically different gameboard offering new hopes and options.

What makes me safe to be around is that I know that various forms of angri... (read more)

5Vaniver
It definitely is the case that a pivotal act that isn't "disruptive" isn't a pivotal act. But I think not all disruptive acts have a significant cost in human lives. To continue with the 'burn all GPUs' example, note that while some industries are heavily dependent on GPUs, most industries are instead heavily dependent on CPUs. The hospital's power will still be on if all GPUs melt, and probably their monitors will still work (if the nanobots can somehow distinguish between standalone GPUs and ones embedded into motherboards). Transportation networks will probably still function, and so on. Cryptocurrencies, entertainment industries, and lots of AI applications will be significantly impacted, but this seems recoverable. But I do think Eliezer's main claim is: some people will lash out in desperation when cornered ("Well, maybe starting WWIII will help with AI risk!"), and Eliezer is not one of those people. So if he makes a call of the form "disruption that causes 10M deaths", it's because the other option looked actually worse, and so this is 'safer'. [If you're one of the people tied up on the trolley tracks, you want the person at the lever to switch it!]

Your method of trying to determine whether something is true or not relies overly much on feedback from strangers. Your comment demands large amounts of intellectual labor from others ('disprove why all easier modes are incorrect'), despite the preamble of the post, while seeming unwilling to put much work in yourself.

1M. Y. Zuo
Yes, when strong assertions are made, a lot of intellectual labor is expected if evidence is lacking or missing. Plus, I wrote it with it in mind as being the first comment, so it raises a few more points than I think is practical for the 100th comment.

The preamble cannot justify points that are justified nowhere else, or else it would be a simple appeal to authority. In the vast majority of cases people who understand what they don't understand hedge their assertions, so since there was a lack of equally strong evidence, or hedging, to support the corresponding claims, I was intrigued whether they did exist and Eliezer simply didn't link them, which could be for a variety of reasons. That is another factor in why I left it open ended. It does seem I was correct for some of the points that the strongest evidence is less substantial than what the claims imply.

The other way I could see a reasonable person view it is that if I had read everything credible to do with the topic I wouldn't have phrased it that way. Though again that seems a bit far fetched, since I highly doubt anyone has read through the preexisting literature completely across the many dozens of topics mentioned here and still remembers every point.

In any case it would have been strange to put a detailed and elaborate critique of a single point in the very first comment, where common courtesy is to leave it more open ended for engagement and to allow others to chime in. Which is why lc's response seems so bizarre, since they don't even address any of the obvious rebuttals of my post and instead open with a non sequitur.

I think Yudkowsky would argue that on a scale from never learning anything to eliminating half your hypotheses per bit of novel sensory information, humans are pretty much at the bottom of the barrel.

When the AI needs to observe nature, it can rely on petabytes of publicly available datasets from particle physics to biochemistry to galactic surveys. It doesn't need any more experimental evidence to solve human physiology or build biological nanobots: we've already got quantum mechanics and human DNA sequences. The rest is just derivation of the consequence... (read more)

4Kayden
I assumed that there will come a time when the AGI has exhausted consuming all available human-collected knowledge and data. My reasoning for the comment was something like:

"Okay, what if AGI happens before we've understood dark matter and dark energy? AGI has incomplete models of these concepts (assuming that it's not able to develop a full picture from available data - that may well be the case, but for a placeholder, I'm using dark energy. It could be some other concept we only discover in the year prior to the AGI's creation and have relatively little data about), and it has a choice to either use existing technology (or create better using existing principles), or carry out research into dark energy and see how it can be harnessed, given reasons to believe that the end-solution would be far more efficient than the currently possible solutions. There might be types of data that we never bothered capturing which might've been useful or even essential for building a robust understanding of certain aspects of nature. It might pursue those data-capturing tasks, which might be bottlenecked by the amount of data needed, the time to collect data, etc. (though far less than what humans would require)."

Thank you for sharing the link. I had misunderstood what the point meant, but now I see. My speculation for the original comment was based on a naive understanding. The post you linked is excellent and I'd recommend everyone give it a read.
  • Solve protein folding problem
  • Acquire human DNA sample
  • Use superintelligence to construct a functional model of human biochemistry
  • Design a virus that exploits human biochemistry
  • Use one of the currently available biochemistry-as-a-service providers to produce a sample that incubates the virus and then escapes their safety procedures (e.g. pay someone to mix two vials sent to them in the mail. The aerosols from the mixing infect them)
5mukashi
• Solve protein folding problem

Fine, no problems here. Up to a certain level of accuracy, I guess.

• Acquire human DNA sample

Ok. Easy.

• Use superintelligence to construct a functional model of human biochemistry

By this, I can deduce different things. One, that you assume that this is possible from points one and two. This is nonsense. There are millions of things that are not written in the DNA. Also, you don't need to acquire a human DNA sample, you just download a fasta file. But, to steelman your argument, let's say that the superintelligence builds a model of human biochemistry not based on a human DNA sample but based on the corpus of biochemistry research, which is something that I find plausible. Up to a certain level!!! I don't think that such a model would be flawless or even good enough, but fine.

• Design a virus that exploits human biochemistry

Here I start having problems believing the argument. Not everything can be computed using simulations, guys. The margin of error can be huge. Would you believe in a superintelligence capable of predicting the weather 10 years in advance? If not, what makes you think that creating a virus is an easier problem?

• Use one of the currently available biochemistry-as-a-service providers to produce a sample that incubates the virus and then escapes their safety procedures (e.g. pay someone to mix two vials sent to them in the mail. The aerosols from the mixing infect them)

Even if you succeed at this, and there are hundreds of alarms that could go off in the meantime, how do you guarantee that the virus kills everyone? I am totally unconvinced by this argument.

Hey, it's now officially no longer May 27th anywhere, and I can't find any announcements yet. How's it going?

Edit: Just got my acceptance letter! See you all this summer!

Sorry that automation is taking your craft. You're neither the first nor the last this will happen to. Orators, book illuminators, weavers, portrait artists, puppeteers, cartoon animators, etc. Even just in the artistic world, you're in fine company. Generally speaking, it's been good for society to free up labor for different pursuits while preserving production. The art can even be elevated as people incorporate the automata into their craft. It's a shame the original skill is lost, but if that kept us from innovating, there would be no way to get common... (read more)

4abramdemski
It's not just a question of automation eliminating skilled work. Deep learning uses the work of artists in a significant sense. There is a patchwork of law and social norms in place to protect artists, EG, the practice of explicitly naming major inspirations for a work. This has worked OK up to now, because all creative re-working of other art has either gone through relatively simple manipulation like copy/paste/caption/filter, or thru the specific route of the human mind taking media in and then producing new media output which takes greater or smaller amounts of inspiration from media consumed.  AI which learns from large amounts of human-generated content, is legitimately a new category here. It's not obvious what should be legal vs illegal, or accepted vs frowned upon by the artistic community.  Is it more like applying a filter to someone else's artwork and calling it your own? Or is it more like taking artistic inspiration from someone else's work? What kinds of credit are due?

Before learning about reversible computation only requiring work when bits are deleted, I would have treated each of my points as roughly independent, with about 10^1.5, 10^4, 10^4, and 10^2.5 odds against respectively. The last point is now down to 10^1.5.
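Since the points are treated as roughly independent, the combined odds are the product of the individual odds (the exponents add); that is where the 1:10^12 figure being asked about comes from, and revising the last factor brings the total down to roughly 10^11:

```latex
10^{1.5} \times 10^{4} \times 10^{4} \times 10^{2.5} = 10^{12},
\qquad
10^{1.5} \times 10^{4} \times 10^{4} \times 10^{1.5} = 10^{11}
```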

Dumping waste information in the baryonic world would be visible.

3Mitchell_Porter
Not if the rate is low enough and/or astronomically localized enough.  It would be interesting to make a model in which fuzzy dark matter is coupled to neutrinos, in a way that maximizes rate of quantum information transfer, while remaining within empirical bounds. 

#1 - Caution doesn't solve problems, it finds solutions if they exist. You can't use caution to ignore air resistance when building a rocket. (Though collapse is not necessarily expected - there's plenty of interstellar dust).

#4 - I didn't know about Landauer's principle, though going by what I'm reading, you're mistaken on its interpretation - it takes 'next to nothing' times the part of the computation you throw out, not the part you read out, where the part you throw out increases proportional to the negentropy you're getting. No free lunch, still, but ... (read more)
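For reference, the bound in question: Landauer's principle prices each erased (not read) bit at k_B T ln 2. At room temperature that is about 0.018 eV per bit; evaluated near the ~2.7 K cosmic microwave background it is about 0.16 meV per bit, which is presumably where the figure in the reply below comes from:

```latex
E_{\min} = k_B T \ln 2
  \;\approx\; 0.018\ \text{eV/bit at } T = 300\ \text{K},
  \qquad
  \approx 0.16\ \text{meV/bit at } T \approx 2.7\ \text{K}
```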

1Ilio
1-3: You are certainly right that cold and homogeneous dark matter is the scientific consensus right now (at least if by consensus we mean « most experts would either think that's true or admit there is no data strong enough to convince most experts it's wrong »). The point I'm trying to make is: as soon as we say « computronium » we are outside of normal science. In normal science, you don't suppose matter can choose to deploy itself like a solar sail and use that to progressively reach outer regions of the galaxy where dangerous supernovae are less frequent. You suppose that if it exists it has no aim, then find the best non-weird model that fits the data. In other words, I don't think we can assume that the scientific consensus is automatically 10^4 or 10^8 strong evidence for « how sure are we that dark matter is not a kind of matter that astrophysicists usually don't bother to consider? », especially when the scientific consensus also includes « we need to keep spending resources on figuring out what dark matter is ». You do agree that's also the scientific consensus, right? (And not just to keep labs open, but really to add data and visit and revisit new and old models, because we're still not sure what it is.)

4: In the theory of purely reversible computation, the size of what you read dictates the size you must throw out. Your computation is however more sound than the theory of pure reversible computation, because pure reversible computation may well be as impossible as perfectly analog computation. Now, suppose all dark matter emits 0.16 meV/bit. How much computation per second and per kilo would keep the thermal radiation well below our ability to detect it?

1:10^12 odds against the notion, easily. About as likely as the earth being flat.

  1. Dark matter does not interact locally with itself or visible matter. If it did, it would experience friction (like interstellar gas, dust and stars) and form into disk shapes when spiral galaxies form into disk shapes. A key observation of dark matter is that spiral galaxies' rotational velocity behaves as one would expect from an ellipsoid.
  2. The fraction of matter that is dark does not change over time, nor does the total mass of objects in the universe. Sky surveys do not find
... (read more)
6Mitchell_Porter
How did you get this figure? Two one-in-a-million implausibilities?  Quantum computers are close to reversible. Each halo could be a big quantum coherent structure, with e.g. neutrinos as ancillary qubits. The baryonic world might be where the waste information gets dumped. :-) 
1Ilio
Contra #1: Imagine you order a huge stack of computers for massively multiplayer gaming purposes. Would you expect it might collapse under its own weight, or would you expect the builders to be cautious enough that it won't collapse like passive dust in free fall?

Contra #4: Nope. Landauer's principle implies that reversible computation costs nothing (until you want to read the result, which then costs next to nothing times the size of the result you want to read, irrespective of the size of the computation proper). Present day computers are obviously very far from this limit, but you can't assume « computronium » is too.

#2 and #3 sound stronger, imo. Could you provide a glimpse of the confidence intervals and how they vary from one survey to the next?

Unlike what you would expect with black holes, we can see that the Boötes void contains very little mass by looking for gravitational lensing and the movement of surrounding galaxies.

Answer by Daphne_W150

On the SLOAN webpage, there's a list of ongoing and completed surveys, some of which went out to z=3 (10 billion years ago/away), though the more distant ones didn't use stellar emissions as output. Here is a youtube video visualizing the data that eBOSS (a quasar study) added in 2020, but it shows it alongside visible/near-infrared galaxy data (blue to green datasets), which go up to about 6 billion years. Radial variations in density in the observed data can be explained by local obstructions (the galactic plane, gas clouds, nearby galaxies), while radia... (read more)

There definitely seem to be (relative) grunt work positions in AI safety, like this, this or this. Unless you think these are harmful, it seems like it would be better to direct the Alec-est Alecs of the world that way instead of risking them never contributing.

I understand not wanting to shoulder responsibility for their career personally, and I understand wanting an unbounded culture for those who thrive under those conditions, but I don't see the harm in having a parallel structure for those who do want/need guidance.

3TekhneMakre
That seems maybe right if Alec isn't *interested* in helping in non-"grunt" ways. (TBC "grunt" stuff can be super important; it's just that we seem much more bottlenecked on 1. non-grunt stuff, and 2. grunt stuff for stuff that's too weird for people like this to decide to work on.) I'm also saying that Alec might end up being able and willing to help in non-grunt ways, but not by taking orders, and rather by going off and learning how to do non-grunt stuff in a context with clearer feedback.

It could be harmful to Alec to give him orders to work on "grunt" stuff, for example by playing into his delusion that doing some task is crucially important for the world not ending, which is an inappropriate amount of pressure and stress and, more importantly, probably false. It could potentially be harmful if Alec provides labor for whoever managed to gain control of the narrative via fraud, because then fraudsters get lots of labor and are empowered to do more fraud. It could be harmful if Alec feels he has to add weight to the narrative that what he's doing matters, thereby amplifying information cascades.

Well, it's better, but I think you're still playing into [Alec taking things you say as orders], which I claim is a thing, so that in practice Alec will predictably and systematically be less helpful and more harmful than if he weren't [taking things you say as orders].

There seems to be an assumption here that Alec would do something relatively helpful instead if he weren't taking the things you say as orders. I don't think this is always the case: for people who aren't used to thinking for themselves, the problem of directing your career to reduce AI risk ... (read more)

2TekhneMakre
Good point. My guess is that if Alec is sufficiently like this, the right thing to do is to tell Alec not to work on AI risk for now. Instead, Alec, do other fun interesting things that matter to you; especially, try looking for things that you're interested in / matter to you apart from social direction / feedback (and which aren't as difficult as AI safety); and stay friends with me, if you like.

This is where things go wrong. The actual credence of seeing a hypercomputer is zero, because a computationally bounded observer can never observe such an object in such a way that differentiates it from a finite approximation. As such, you should indeed have a zero percent probability of ever moving into a state in which you have performed such a verification, it is a logical impossibility. Think about what it would mean for you, a computationally bounded approximate bayesian, to come into a state of belief that you are in possession of a hypercomputer (a

... (read more)

That's not a middle ground between a good world and a neutral world, though; that's just another way to get a good world. If we assume a good world is exponentially unlikely, a 10 year delay might mean the odds of a good world rise from 10^-10 to 10^-8 (as opposed to pursuing Clippy bringing the odds of a bad world down from 10^-4 to 10^-6).

If you disagree with Yudkowsky about his pessimism about the probability of good worlds, then my post doesn't really apply. My post is about how to handle him being correct about the odds.

That's a fair point - my model does assume AGI will come into existence in non-negative worlds. Though I struggle to actually imagine a non-negative world where humanity is alive a thousand years from now and AGI hasn't been developed. Even if all alignment researchers believed it was the right thing to pursue, which doesn't seem likely.

1Evan R. Murphy
Even a 5~10 year delay in AGI deployment might give enough time to solve the alignment problem.