All of gilch's Comments + Replies

gilch30

See https://pauseai.info. They think lobbying efforts have been more successful than expected, but politicians are reluctant to act on it before they hear about it from their constituents. Individuals sending emails also helps more than expected. The more we can create common knowledge of the situation, the more likely the government acts.

2ChristianKl
Under the Biden administration, lobbying efforts had some success. In recent weeks, the Trump administration undid all of the Biden administration's efforts. Especially with China making progress with DeepSeek, it's unlikely that you can convince the Trump administration to slow down AI.
gilch20

It's also available on Android.

gilch20

Finitism doesn't reject the existence of any given natural number (although ultrafinitism might), nor the validity of the successor function (counting), nor even the notion of a "potential" infinity (like time), just the idea of a completed one being an object in its own right (which can be put into a set). The Axiom of Infinity doesn't let you escape the notion of classes which can't themselves be an element of a set. Set theory runs into paradoxes if we allow it. Is it such an invalid move to disallow the class of Naturals as an element of a set, when ev... (read more)

1quwgri
I don't quite understand how actual infinity differs from potential infinity in this context. Time in ToR is considered one of the dimensions of spacetime. How can space be considered a "potential infinity"? It subjectively looks like that to a forward-traveling observer. But usually we use the paradigm of objective reality, where everything is assumed to exist equally, together with the past and the future, if we recall ToR again. Are we supposed to have a special case here, where we need to switch to the paradigm of subjective reality? I am familiar with the idea that "the information that enables us to act best is true," but it seems to me to be just a beautiful phrase, because in most cases, in order to develop a model that enables us to act best, we still have to be guided by "truth" in the old, ordinary sense. That is, we obtain some initial "atoms of truth" through experience, but later we have to take care of their logical consistency. And we are not quite right to call some high-level construction "truth," even if it works well, if it does not logically agree with the "atoms" we used to create it. This case is free from that problem, since practical verification in this area is impossible. But still, the feeling of some hypocrisy toward oneself does not disappear. To admit, at least at the edge of consciousness, the possibility that "the Universe is not X" and at the same time use only "the Universe is X" in calculations involves some kind of contradiction. This is either an act of doublethink (for an agnostic) or an act of politeness (for an ultrafinitist). The difference between repeating patterns could manifest itself if there were interactions between them. But here reality rather speaks in favor of ultrafinitism. If the hierarchy of complication of structures with interactions could continue to infinity (even at the cost of slowing down the interactions), then theoretically we could find ourselves at any level of the hierarchy. Then we most l
gilch20

I installed Mindfulness Bell on my phone, and every time it chimes, I ask myself, "Should I be doing something else right now?" Sometimes I'm being productive and don't need to stop. When I notice I've started ignoring it, I change the chime sound so I notice it again. The interval is adjustable. If I'm stuck scrolling social media, this often gives me the opportunity to stop. Doesn't always work though. I also have it turned off at night so I can sleep. This is a problem if I get stuck on social media at night when I should be sleeping. Instead, after bed time, I progressively dim the lights and screen to the point where I can barely read it. That's usually enough to let me fall asleep.

gilch20

I'm hearing intuitions, not arguments here. Do you understand Cantor's Diagonalization argument? This proves that the set of all integers is "smaller" (in a well-defined way) than the set of all real numbers, despite the set of all integers being already infinite in size. And it doesn't end there. There is no largest set.

Russell's paradox arises when a set definition refers to itself. For example, in a certain town, the barber is the one who shaves all those (and only those) who do not shave themselves. This seems to make sense on its face. But who shaves the... (read more)
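In symbols, the set-theoretic version of the paradox is standard (this formalization is added for clarity, not part of the original comment):

$$R = \{x \mid x \notin x\} \implies (R \in R \iff R \notin R).$$

Naive comprehension lets you define $R$, so axiomatic set theories like ZFC restrict comprehension to subsets of already-given sets, which blocks the construction.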

1quwgri
We can leave theology. It is not so important. I am more concerned with the questions of finitism and infinitism in relation to the paradox of sets. Finitism is logically consistent. However, it seems to me that it suffers from the same problem as the ontological proof of the existence of God. It is an attempt to make a global prediction about the nature of the Universe based on a small thought experiment. Predictions like "Time cannot be infinite" and "Space cannot be infinite" follow directly from finitism. It turns out that we make these predictions based on our mathematical problems with the paradox of sets.

At the same time, the paradox of sets itself resembles the paradox "I'm telling a lie now," and, it seems, should look for a solution somewhere in the same area. If we think off the cuff, it seems to me, naively, that the very concept of "ordinary set" is composed in such a way as to lead to paradoxes. This is a problem of the concept of "ordinary set," not a problem of the existence or non-existence of physical infinity.

Oh, okay. I don't really understand this topic. But as far as I know, not all mathematicians are finitists. So it seems that the proofs of finitism are not flawless. On the other hand, how is the problem of the set paradox solved in cosmological infinitism? Something like "The Infinite Universe may exist, but it is forbidden to talk about it as an object"? Because any attempt to do so will bring you back to the set paradox, if you take it seriously. "Talk about any particular part of the Universe as much as you like, but don't even think about the Universe as a whole"? This risks forming a somewhat patchwork model of the worldview. "It may exist, but you cannot think about it intelligently and rationally." One is reminded of Zeno's attempts to prove that one cannot think about motion without contradictions.
gilch34

The main idea here is that one can always derive a "greater" set (in terms of cardinality) from any given set, even if the given set is already infinite, because there are higher degrees of infinity. There is no greatest infinity, just like there is no largest number. So even if (hypothetically) a Being with infinite knowledge exists, there could be Beings with greater knowledge than that. No matter which god you choose, there could be one greater than that, meaning there are things the god you chose doesn't know (and hence He isn't "omniscient", and there... (read more)
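The theorem behind this claim is Cantor's (a standard result, stated here for reference): for any set $X$, its power set is strictly bigger,

$$|X| < |\mathcal{P}(X)|,$$

because any map $f : X \to \mathcal{P}(X)$ fails to hit the diagonal set $D = \{x \in X \mid x \notin f(x)\}$. Applied to any candidate "set of everything an omniscient being knows," this yields a strictly larger collection of truths.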

1quwgri
"An infinite universe can exist." "A greatest infinity cannot exist." I think there is some kind of logical contradiction here. If the Universe exists and if it is infinite, then it must correspond to the concept of "the greatest infinity." True, Bertrand Russell once expressed doubt that one can correctly reason about the "Universe as a whole." I don't know. It seems strange to me. As if we recognize the existence of individual things, but not of all things as a whole. It seems like some kind of arbitrary crutch, a private "ad hoc" solution, conditioned by the weakness of our brain. As for God or Gods, then, hypothetically, in the case of the coincidence of their value systems and the mental interaction between them according to a common agreed protocol, these problems should not be very important.
gilch30

I don't know of any officially sanctioned way. But, hypothetically, meeting a publicly-known real human in person and giving them your public PGP key might work. Said human could vouch for you and your public key, and no one else could fake a message signed by you, assuming you protect your private key. It's probably sufficient to sign and post one message proving this is your account (in your profile bio, probably), and then we just have to trust you to keep your account password secure.
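A hypothetical sketch of how the signing step could work with standard GnuPG commands (the email address and filenames are placeholders):

# Generate a key pair, then export the public key to hand over in person:
gpg --full-generate-key
gpg --armor --export you@example.com > pubkey.asc

# Sign a short identity claim to post in the profile bio:
echo "This account belongs to the holder of this PGP key." > claim.txt
gpg --clearsign claim.txt    # writes the signed message to claim.txt.asc

# Anyone holding the vouched-for public key can verify the signature:
gpg --import pubkey.asc
gpg --verify claim.txt.asc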

gilch20

Would it help if we wore helmets?

2Adam Zerner
I assume you mean wearing a helmet while in a car to reduce the risk of car-related injuries and deaths. I actually looked into this, and from what I remember, helmets do more harm than good. They have the benefit of protecting you from hitting your head against something, but the harm in accidents comes much more from whiplash, and by adding weight to (the top of) your head, helmets make whiplash worse. This cost outweighs the benefits by a fair amount.
gilch30

Hissp v0.5.0 is up.

python -m pip install hissp

If you always wanted to learn about Lisp macros, but only know Python, try the Hissp macro tutorials.

1Shankar Sivarajan
Is this a Lisp-to-Python transpiler? 
gilch52

That seems to be getting into Game Theory territory. One can model agents (players) with different strategies, even suboptimal ones. A lot of the insight from Game Theory isn't just about how to play a better strategy, but how changing the rules affects the game.
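As a toy illustration of that last point (the payoff numbers here are invented for the example, not taken from the comment), a few lines of Python show how changing the rules changes what counts as a better strategy in a one-shot Prisoner's Dilemma:

# Toy Prisoner's Dilemma: payoffs[(row, col)] = (row player's payoff, col player's payoff).
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def best_response(table, opponent):
    # The row action that maximizes the row player's payoff against `opponent`.
    return max("CD", key=lambda a: table[(a, opponent)][0])

print(best_response(payoffs, "C"), best_response(payoffs, "D"))  # D D: defection dominates

# Change the rules: an enforced fine of 4 for defecting.
fined = {(r, c): (pr - 4 * (r == "D"), pc - 4 * (c == "D"))
         for (r, c), (pr, pc) in payoffs.items()}

print(best_response(fined, "C"), best_response(fined, "D"))  # C C: cooperation now dominates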

gilch40

Not sure I understand what you mean by that. The Universe seems to follow relatively simple deterministic laws. That doesn't mean you can use quantum field theory to predict the weather. But chaotic systems can be modeled as statistical ensembles. Temperature is a meaningful measurement even if we can't calculate the motion of all the individual gas molecules.
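The gas example has a standard formula behind it: for an ideal monatomic gas, temperature is just the average kinetic energy per molecule,

$$\langle E_{\text{kin}} \rangle = \tfrac{3}{2} k_B T,$$

so one macroscopic number usefully summarizes on the order of $10^{23}$ molecular trajectories that no one could compute individually.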

If you're referring to human irrationality in particular, we can study cognitive bias, which is how human reasoning diverges from that of idealized agents in certain systematic ways. This is a topic o... (read more)

1yc
Thanks, I was thinking more of the latter (human irrationality), but found your first part still interesting. I understand irrationality has been studied in psychology and economics, and I was wondering about the modeling of irrationality in particular, for 1-2 players, but also for a group of agents. For example, there are arguments saying that for a group of irrational agents, the group choice could be rational depending on group structure, etc. On individual irrationality and continued group irrationality, I think we would need to estimate the level of (and prevalence of) irrationality in some way that captures unconscious preferences, or incomplete information. How to best combine these? Maybe it would just be more data driven.
gilch42

It's short for "woo-woo", a derogatory term skeptics use for magical thinking.

I think the word originates as onomatopoeia from the haunting woo-woo Theremin sounds played in black-and-white horror films when the ghost was about to appear. It's what the "supernatural" sounds like, I guess.

It's not about the belief being unconventional as much as it being irrational. Just because we don't understand how something works doesn't mean it doesn't work (it just probably doesn't), but we can still call your reasons for thinking so invalid. A classic skeptic might ... (read more)

3ZY
Ah thanks. Do you know why these former rationalists were "more accepting" of irrational thinking? And to be extremely clear, does "irrational" here mean not following one's preference with their actions, and not truth seeking when forming beliefs?
gilch40

Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions. You could call this the Ozymandias approach.

ChaosGPT already exists. It's incompetent to the point of being comical at the moment, but maybe more powerful analogues will appear and wreak havoc. Considering the current prevalence of malware, it might be more surprising if something like this didn't happen.

We've already seen developments that could have been considere... (read more)

gilch3-1

We have already identified some key resources involved in AI development that could be restricted. The economic bottlenecks are mainly around high energy requirements and chip manufacturing.

Energy is probably too connected to the rest of the economy to be a good regulatory lever, but the U.S. power grid can't currently handle the scale of the data centers the AI labs want for model training. That might buy us a little time. Big tech is already talking about buying small modular nuclear reactors to power the next generation of data centers. Those probably w... (read more)

gilch42

I do not really understand how technical advance in alignment realistically becomes a success path. I anticipate that in order for improved alignment to be useful, it would need to be present in essentially all AI agents or it would need to be present in the most powerful AI agent such that the aligned agent could dominate other unaligned AI agents.

The instrumental convergence of goals implies that a powerful AI would almost certainly act to prevent any rivals from emerging, whether aligned or not. In the intelligence explosion scenario, progress would... (read more)

gilch20

How about "bubble lighting" then?

1Radford Neal
How about "AI scam"? You know, something people will actually understand.  Unlike "gas lighting", for example, which is an obscure reference whose meaning cannot be determined if you don't know the reference.
gilch50

The forms of approaches that I expected to see but haven’t seen too much of thus far are those similar to the one that you linked about STOP AI. That is, approaches that would scale with the addition of approximately average people.

Besides STOP AI, there's also the less extreme PauseAI. They're interested in things like lobbying, protests, lawsuits, etc.

gilch40

I presume that your high P(doom) already accounts for your estimation of the probability of government action being successful. Does your high P(doom) imply that you expect these to be too slow, or too ineffective?

Yep, most of my hope is on our civilization's coordination mechanisms kicking in in time. Most of the world's problems seem to be failures to coordinate, but that's not the same as saying we can't coordinate. Failures are more salient, but that's a cognitive bias. We've achieved a remarkable level of stability, in the light of recent history.... (read more)

5Lerk
This is where most of my anticipated success paths lie as well. I do not really understand how technical advance in alignment realistically becomes a success path. I anticipate that in order for improved alignment to be useful, it would need to be present in essentially all AI agents, or it would need to be present in the most powerful AI agent such that the aligned agent could dominate other unaligned AI agents. I don't expect uniformity of adoption, and I don't necessarily expect alignment to correlate with agent capability. By my estimation, this success path rests on the probability that the organization with the most capable AI agent is also specifically interested in ensuring alignment of that agent. I expect these goals to interfere with each other to some degree such that this confluence is unlikely. Are your expectations different?

I have not been thinking deeply in the direction of a superintelligent AGI having been achieved already. It certainly seems possible. It would invalidate most of the things I have thus far thought of as plausible mitigation measures.

Assuming a superintelligent AGI does not already exist, I would expect someone with a high P(doom) to be considering options of the form:

1. Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions. You could call this the Ozymandias approach.
2. Identify key resources involved in AI development and work to restrict those resources. For truly desperate individuals this might look like the Metcalf attack, but a tamer approach might be something more along the lines of investing in a grid operator and pushing to increase delivery fees to data centers.

I haven't pursued these thoughts in any serious way because my estimation of the threat isn't as high as yours. I think it is likely we are unintentionally heading toward the Ozymandias approach anyhow.
gilch10

Protesters are expected to be at least a little annoying. Strategic unpopularity might be a price worth paying if it gets results. Sometimes extremists shift the Overton Window.

gilch32

I mean, yes, hence my comment about ChatGPT writing better than this, but if word gets out that Stop AI is literally using the product of the company they're protesting in their protests, it could come off as hypocrisy.

I personally don't have a problem with it, but I understand the situation at a deeper level than the general public. It could be a wise strategic move to hire a human writer, or even ask for competent volunteer writers, including those not willing to join the protests themselves, although I can see budget or timing being a factor in the decision.

Or they could just use one of the bigger Llamas on their own hardware and try to not get caught. Seems like an unnecessary risk though.

3Remmelt
No worries. We won't be using ChatGPT or any other model to generate our texts.
gilch1924

The press release strikes me as poorly written. It's middle-school level. ChatGPT can write better than this. Exactly who is your (Stop AI's) audience here? "The press"?

Exclamation points are excessive. "Heart's content"? You're not in this for "contentment". The "you can't prove it, therefore I'm right" argument is weak. The second page is worse. "Toxic conditions"? I think I know what you meant, but you didn't connect it well enough for a general audience. "accelerate our mass extinction until we are all dead"? I'm pretty sure the "all dead" part has to ... (read more)

gilch31

The mods probably have access to better analytics. I, for one, was a long-time lurker before I said anything.

gilch101

My current estimate of P(doom) in the next 15 years is 5%. That is, high enough to be concerned, but not high enough to cash out my retirement. I am curious about anyone harboring a P(doom) > 50%. This would seem to be high enough to support drastic actions. What work has been done to develop rational approaches to such a high P(doom)?

I mean, what do you think we've been doing all along?

I'm at like 90% in 20 years, but I'm not claiming even one significant digit on that figure. My drastic actions have been to get depressed enough to be unwilling ... (read more)

6Lerk
So, the short answer is that I am actually just ignorant about this. I'm reading here to learn more, but I certainly haven't ingested a sufficient history of relevant works. I'm happy to prioritize any recommendations that others have found insightful or thought-provoking, especially from the point of view of a novice.

I can answer the specific question "what do I think" in a bit more detail. The answer should be understood to represent the viewpoint of someone who is new to the discussion and has only been exposed to an algorithmically influenced, self-selected slice of the information.

I watched the Lex Fridman interview of Eliezer Yudkowsky, and around 3:06 Lex asks about what advice Eliezer would give to young people. Eliezer's initial answer is something to the extent of "Don't expect a long future." I interpreted Eliezer's answer largely as trying to evoke a sense of reverence for the seriousness of the problem. When pushed on the question a bit further, Eliezer's given answer is "…I hardly know how to fight myself at this point." I interpreted this to mean that the space of possible actions being searched appears intractable from the perspective of a dedicated researcher. This, I believe, is largely the source of my question. Current approaches appear to be losing the race, so what other avenues are being explored?

I read the "Thomas Kwa's MIRI research experience" discussion, and there was a statement to the effect that MIRI does not want Nate's mindset to be known to frontier AI labs. I interpreted this to mean that the most likely course being explored at MIRI is to build a good AI to preempt or stop a bad AI. This strikes me as plausible because my intuition is that the LLM architectures being employed are largely inefficient for developing AGI. However, the compute scaling seems to work well enough that it may win the race before other competing ideas come to fruition.

An example of an alternative approach that I read
gilch62

On a related topic, I am looking to explore how to determine the right scale of the objective function for revenge (or social correction if you prefer a smaller scope). My intuition is that revenge was developed as a mechanism to perform tribal level optimizations. In a situation where there has been a social transgression, and redressing that transgression would be personally costly but societally beneficial, what is the correct balance between personal interest and societal interest?

This is a question for game theory. Trading a state of total anarch... (read more)

gilch50

at the personal scale it might yield the decision that one should go work in finance and accrue a pile of utility. But if you apply instrumental rationality to an objective function at the societal scale it might yield the decision to give all your spare resources to the most effective organizations you can find.

Yes. And yes. See You Need More Money for the former, Effective Altruism for the latter, and Earning to give for a combination of the two.

As for which to focus on, well, Rationality doesn't decide for you what your utility function is. That's on... (read more)

gilch70

The questions seem underspecified. You haven't nailed down a single world, and different worlds could have different answers. Many of today's laws no longer make sense in worlds like you're describing. They may be ignored and forgotten, or updated after some time.

If we have the technology to enhance human memory for perfect recall, does that violate copyright, since you're recording everything? Arguably, it's fair use to remember your own life. Sharing that with others gets murkier. Also, copyright was originally intended to incentivize creation. Do... (read more)

gilch102

What we'd ask depends on the context. In general, not all rationalist teachings are in the form of a question, but many could probably be phrased that way.

"Do I desire to believe X if X is the case and not-X if X is not the case?" (For whatever X in question.) This is the fundamental lesson of epistemic rationality. If you don't want to lie to yourself, the rest will help you get better at that. But if you do, you'll lie to yourself anyway and all your acquired cleverness will be used to defeat itself.

"Am I winning?" This is the fundamental lesson of instr... (read more)

gilch*20

I think #1 implies #2 pretty strongly, but OK, I was mostly with you until #4. Why is it that low? I think #3 implies #4, with high probability. Why don't you?

#5 and #6 don't seem like strong objections. Multiple scenarios could happen multiple times in the interval we are talking about. Only one has to deal the final blow for it to be final, and even blows we survive, we can't necessarily recover from, or recover from quickly. The weaker civilization gets, the less likely it is to survive the next blow.

We can hope that warning shots wake up the world enou... (read more)

gilch*20

I don't really have a problem with the term "intelligence" myself, but I see how it could carry anthropomorphic baggage for some people. However, I think the important parts are, in fact, analogous between AGI and humans. But I'm not attached to that particular word. One may as well say "competence" or "optimization power" without losing hold of the sense of "intelligence" we mean when we talk about AI.

In the study of human intelligence, it's useful to break down the g factor (what IQ tests purport to measure) into fluid and crystallized intelligence. The ... (read more)

gilch20

Strong ML engineering skills (you should have completed at least the equivalent of a course like ARENA).

What other courses would you consider equivalent?

2Jesse Hoogland
To be clear, I don't care about the particular courses, I care about the skills. 
gilch20

I don't know why we think we can colonize Mars when we can't even colonize Alaska. Alaska at least has oxygen. Where are the domed cities with climate control?

It's not that we can't colonise Alaska, it's that it's not economically productive to do so.

I wouldn't expect colonising Mars to be economically productive, but instead to be funded by other sources (essentially charity).

gilch50

Specifically, while the kugelblitz is a prediction of general relativity, quantum pair production from strong electric fields makes it infeasible in practice. Even quasars wouldn't be bright enough, and those are far beyond the energy level of a single Dyson sphere. This doesn't rule out primordial black holes forming at the time of the Big Bang, however.

It might still be possible to create micro black holes with particle accelerators, but how easy this is depends on some unanswered questions about physics. In theory, such an accelerator might need to be a... (read more)
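For reference, the relevant threshold is the Schwinger limit, the field strength at which vacuum pair production becomes significant (a standard QED result, added here for context):

$$E_c = \frac{m_e^2 c^3}{e \hbar} \approx 1.3 \times 10^{18}\ \text{V/m}.$$

Concentrating enough light to form a kugelblitz would require fields far beyond this, so the energy bleeds off into electron-positron pairs before a horizon can form.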

Answer by gilch*231

What are efficient Dyson spheres probably made of?

There are many possible Dyson sphere designs, but they seem to fall into three broad categories: shells, orbital swarms, and bubbles. Solid shells are probably unrealistic. Known materials aren't strong enough. Orbital swarms are more realistic but suffer from some problems with self-occlusion and possibly collisions between modules. Limitations on available materials might still make this the best option, at least at first.

But efficient Dyson spheres are probably bubbles. Rather than being made of satel... (read more)
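For context on the bubble design (a standard result about statites, not part of the original comment): both sunlight pressure and solar gravity fall off as $1/r^2$, so a thin sail can hover at any distance provided its mass per unit area is below a critical value. For a perfectly absorbing sail,

$$\frac{L_\odot A}{4 \pi r^2 c} \ge \frac{G M_\odot m}{r^2} \iff \frac{m}{A} \le \frac{L_\odot}{4 \pi G M_\odot c} \approx 0.78\ \text{g/m}^2,$$

and a perfect reflector can be about twice as heavy, which is why bubble designs demand extremely lightweight films.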

gilch*52

Rob Miles' YouTube channel has some good explanations about why alignment is hard.

We can already do RLHF, the alignment technique that made ChatGPT and derivatives well-behaved enough to be useful, but we don't expect this to scale to superintelligence. It adjusts the weights based on human feedback, but this can't work once the humans are unable to judge actions (or plans) that are too complex.
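For intuition, here is a toy sketch of the shape of that feedback loop in plain Python. It is not the actual RLHF pipeline (which fits a learned reward model from human preference data and fine-tunes an LLM, typically with PPO); the candidate responses, reward function, and update rule are invented for illustration:

import math, random

# A "policy" over three canned responses; the stand-in "human" rewards politeness.
responses = ["rude reply", "polite reply", "off-topic reply"]
logits = [0.0, 0.0, 0.0]

def human_reward(response):
    return 1.0 if response == "polite reply" else 0.0

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

learning_rate = 0.5
for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(responses)), weights=probs)[0]
    baseline = sum(p * human_reward(r) for p, r in zip(probs, responses))
    advantage = human_reward(responses[i]) - baseline
    # REINFORCE: gradient of log softmax w.r.t. logit j is (1 if j == i else 0) - probs[j].
    for j in range(len(logits)):
        logits[j] += learning_rate * advantage * ((1.0 if j == i else 0.0) - probs[j])

print(max(zip(softmax(logits), responses)))  # the polite reply ends up dominating

The scaling failure described above corresponds to human_reward becoming unreliable: once the rater can no longer judge the outputs, the same update loop optimizes for whatever the rater mistakenly approves.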

If we don't mind the process being slow, this could be achieved by a single "crawler" machine that would go through the matrix field by field and do the updates.

... (read more)
gilch42

Get a VPN. It's good practice when using public Wi-Fi anyway. (Best practice is to never use public Wi-Fi. Get a data plan. Tello is reasonably priced.) Web filters are always imperfect, and I mostly object to them on principle. They'll block too little or too much, or more often a mix of both, but it's a common problem in e.g. schools. Are you sure you're not accessing the Wi-Fi of the business next door? Maybe B&N's was down.

gilch62

Takeover, if misaligned, also counts as doom. X-risk includes permanent disempowerment, not just literal extinction. That's according to Bostrom, who coined the term:

One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

A reasonably good outcome might be for ASI to set some guardrails to prevent death and disasters (like other black marbles) and then mostly leave us alone.

My understanding is that Neuralink is a bet on "cyborgism". It doesn't look like it will make it... (read more)

gilch30

There's some UC San Francisco research to back up this view. California has the nation's biggest homeless population mainly due to unaffordable housing, not migration from elsewhere for a nicer climate.

gilch50

I notice that https://metaphor.systems (mentioned here earlier) now redirects to Exa. Have you compared it to Phind (or Bing/Windows Copilot)?

2papetoast
Metaphor rebranded themselves. No and no, thanks for sharing though, will try it out!
gilch*52

Relatively healthy people do occasionally become homeless due to misfortune, but they usually don't stay homeless. It could be someone from the lower class living paycheck to paycheck who has a surprise expense they're not insured for and can't make rent. It could be a battered woman and her children escaping domestic abuse. They get services, they get back on their feet, they get housed. Ideally, the social safety nets would work faster and better than they do in practice, but the system basically works out for them.

The persistently homeless are a differe... (read more)

gilch*52

Why do we need more public bathrooms? I'm skeptical because if there was demand for more bathrooms, then I'd expect the market to produce them.

I wouldn't expect so, why would you think that? Markets have a problem handling unpriced externalities without regulation. (Tragedy of the commons.) Pollution is a notable example of market failure, and the bathrooms issue is a special case of exactly this. Why pay extra to dispose of your waste properly if you can get away with dumping it elsewhere? As a matter of public health, it's better for everyone if this... (read more)

3eye96458
From your response it seems to me that I've understood your question and position, so I'm responding to it here.

Epistemic status: I am a public policy and economics amateur. I do not have extreme cognitive ability and I thought about the question for < 1 hour.

I'm going to suggest some other possible ways to stop homeless people from shitting in the streets and then I will nominate my current preferred solution.

1. Remove legal restrictions to running just-bathroom businesses.
2. Reduce the number of homeless people (by, for example, giving them homes and/or letting developers build more homes).
3. Start a charity that operates bathrooms for the homeless.

My current preference is a mix of (a) punishing people who shit in the streets with jail time, (b) reducing the number of homeless by facilitating more housing development, and (c) removing legal restrictions on running just-bathroom businesses. AFAICT I prefer my solution to yours because I am wary of the San Francisco Division of Public Bathrooms turning into a permanent boondoggle (I'm generally suspicious of government activity, although I do accept that, for example, the Apollo Program and Manhattan Project are very impressive, and IMO most American police departments do an okay job), and because I suspect the situation is being heavily influenced by anti-housing-development policies and anti-just-bathroom-business policies. If you have a good critique of my solution, please offer it. As I said, I'm a public policy noob.
1eye96458
I explained my reasoning here. Also note that most people who have demand for using the bathroom are not penniless homeless people.

I agree. A self-interested rational agent would just shit in the streets if they could get away with it.

I agree. I understand you to be raising the question, "What is the best way to stop homeless people from shitting in the streets?" And then you've suggested four possible solutions:

1. Government operates more free-to-use bathrooms.
2. Government pays private businesses to make their bathrooms available to everyone.
3. Government fines people who shit in the streets.
4. Government jails people who shit in the streets.

And you claim that (1) and (2) are the best options. Do I understand you correctly?
gilch20

The table isn't legible with LessWrong's dark theme.

gilch122

I feel like these would be more effective if standardized, dated and updated. Should we also mention gag orders? Something like this?

As of June 2024, I have signed no contracts or agreements whose existence I cannot mention.
As of June 2024, I am not under any kind of gag order whose existence I cannot mention.
Last updated June 2024. I commit to updating at least annually.

Could LessWrong itself be compelled even if the user cannot? Should we include PGP signatures or something?

gilch1-1

I thought it was mostly due to the high prevalence of autism (and the social anxiety that usually comes with it) in the community. The more socially agentic rationalists are trying.

gilch20

But probably he should be better at communication e.g. realizing that people will react negatively to raising the possibility of nuking datacenters without lots of contextualizing.

Yeah, pretty sure Eliezer never recommended nuking datacenters. I don't know who you heard it from, but this distortion is slanderous and needs to stop. I can't control what everybody says elsewhere, but it shouldn't be acceptable on LessWrong, of all places.

He did talk about enforcing a global treaty backed by the threat of force (because all law is ultimately backed by viole... (read more)

4Matthew Barnett
Most international treaties are not backed by military force, such as the threat of airstrikes. They're typically backed by more informal pressures, such as diplomatic isolation, conditional aid, sanctions, asset freezing, damage to credibility and reputation, and threats of mutual defection (i.e., "if you don't follow the treaty, then I won't either"). It seems bad to me that Eliezer's article incidentally amplified the idea that most international treaties are backed by straightforward threats of war, because that idea is not true.
2Thomas Kwa
Thanks, fixed.
gilch20

The argument chain you presented (Deep Learning -> Consciousness -> AI Armageddon) is a strawman. If you sincerely think that's our position, you haven't read enough. Read more, and you'll be better received. If you don't think that, stop being unfair about what we said, and you'll be better received.

Last I checked, most of us were agnostic on the AI Consciousness question. If you think that's a key point to our Doom arguments, you haven't understood us; that step isn't necessarily required; it's not a link in the chain of argument. Maybe AI can be d... (read more)

gilch31

You are not wrong to complain. That's feedback. But this feels too vague to be actionable.

First, we may agree on more than you think. Yes, groupthink can be a problem, and gets worse over time, if not actively countered. True scientists are heretics.

But if the science symposium allows the janitor to interrupt the speakers and take all day pontificating about his crackpot perpetual motion machine, it's also of little value. It gets worse if we then allow the conspiracy theorists to feed off of each other. Experts need a protected space to converse, or we're... (read more)

3jeffreycaruso
Your example of the janitor interrupting the scientist is a good demonstration of my point. I've organized over a hundred cybersecurity events featuring over a thousand speakers, and I've never had a single janitor interrupt a talk. On the other hand, I've had numerous "experts" attempt to pass off fiction as fact, draw assumptions from faulty data, and generally behave far worse than any janitor might, due to their inflated egos. Based on my conversations with computer science and philosophy professors who aren't EA-affiliated, and several who are, their posts are frequently down-voted simply because they represent opposing viewpoints. Do the moderators of this forum do regular assessments to see how they can make improvements in the online culture so that there's more diversity of perspective?
gilch*184

the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs".

Not quite. It was to research how to build friendly AIs. We haven't succeeded yet. What research progress we have made points to the problem being harder than initially thought, and capabilities turned out to be easier than most of us expected as well.

Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche.

Considered by whom? Rationalists? T... (read more)
