All of mako yass's Comments + Replies

I'm also hanging out a lot more with normies these days and I feel this.

But I also feel like maybe I just have a very strong local aura (or like, everyone does, that's how scenes work) which obscures the fact that I'm not influencing the rest of the ocean at all.

I worry that a lot of the discourse basically just works like barrier aggression in dogs. When you're at one of their parties, they'll act like they agree with you about everything; when you're seen at a party they're not at, they forget all that you said and they start baying for blood. Go back to... (read more)

I'm saying they (at this point) may hold that position for (admirable, maybe justifiable) political rather than truthseeking reasons. It's very convenient. It lets you advocate for treaties against racing. It's a lovely story where it's simply rational for humanity to come together to fight a shared adversary and in the process somewhat inevitably forge a new infrastructure of peace (an international safety project, which I have always advocated for and still want) together. And the alternative is racing and potentially a drone war between major powers and... (read more)

Severin T. Seehrich
Huh, that's a potentially significant update for me. Two questions:
1. Can you give me a source for the claim that making the models incapable of deception seems likely to work? I managed to miss that so far.
2. What do you make of Gradual Disempowerment? Seems to imply that even successful technical alignment might lead to doom.

In watching interactions with external groups, I'm... very aware of the parts of our approach to the alignment problem that the public, ime, due to specialization being a real thing, actually cannot understand, so success requires some amount of uh, avoidance. I think it might not be incidental that the platform does focus (imo excessively) on more productive, accessible common enemy questions like control and moratorium, ahead of questions like "what is CEV and how do you make sure the lead players implement it". And I think to justify that we've been for... (read more)

Severin T. Seehrich
So you think the alignment problem is solvable within the time we appear to have left? I'm very sceptical about that, and that makes me increasingly prone to believe that CEV, at this point in history, genuinely is not a relevant question. Which appears to be a position a number of people in PauseAI hold.
mako yass

Rationalist discourse norms require a certain amount of tactlessness, saying what is true even when the social consequences of saying it are net negative. Politics (in the current arena) requires some degree of deception or at least complicity with bias (lies by omission, censorship/nonpropagation of inconvenient counterevidence).

Rationalist forum norms essentially forbid speaking in ways that're politically effective. Those engaging in political outreach would be best advised to read lesswrong but never comment under their real name. If they have go... (read more)

I don't think that effective politics in this case requires deception, and deception often backfires in unexpected ways.

Gabriel and Connor suggest in their interview that radical honesty - genuinely trusting politicians, advisors and average people to understand your argument and recognizing that they also don't want to die from ASI - can be remarkably effective. The real problem may be that this approach is not attempted enough. I remember this as a slightly less positive, but still positive, datapoint: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speaking-to-con... (read more)

For the US to undertake such a shift, it would help if you could convince them they'd do better in a secret race than an open one. There are indications that this may be possible, and there are indications that it may be impossible.

I'm listening to an Ecosystemics Futures podcast episode, which, to characterize... it's a podcast where the host has to keep asking guests whether the things they're saying are classified or not just in case she has to scrub it. At one point, Lue Elizondo does assert, in the context of talking to a couple of other people who kn... (read more)

RHollerith
Good points, which in part explain why I think it is very very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.
  • I'll change a line early on in the manual to "Objects aren't common, currently. It's just corpses for now, which are explained on the desire cards they're relevant to and don't matter otherwise". Would that address it? (the card is A Terrible Hunger, which also needs to be changed to "a terrible hunger.\n4 points for every corpse in your possession at the end (killing generally always leaves a corpse, corpses can be carried; when agents are in the same land as a corpse, they can move it along with them as they move)")
  • What's this in response to?
  • Latter. Unsu
... (read more)
Gunnar_Zarncke
Thanks for the clarifications! The second referred to holes in the landscape mentioned in the post, not in the rules.

I briefly glanced at wikipedia and there seemed to be two articles supporting it. This one might be the one I'm referring to (if not, it's a bonus) and this one seems to suggest that conscious perception has been trained.

I think unpacking that kind of feeling is valuable, but yeah it seems like you've been assuming we use decision theory to make decisions, when we actually use it as an upper bound model to derive principles of decisionmaking that may be more specific to human decisionmaking, or to anticipate the behavior of idealized agents, or (the distinction between CDT and FDT) as an allegory for toxic consequentialism in humans.

mako yass

I'm aware of a study that found that the human brain clearly responds to changes in direction of the earth's magnetic field (iirc, the test chamber isolated the participant from the earth's field then generated its own, then moved it, while measuring their brain in some way) despite no human having ever been known to consciously perceive the magnetic field/have the abilities of a compass.

So, presumably, compass abilities could be taught through a neurofeedback training exercise.

I don't think anyone's tried to do this ("neurofeedback magnetoreception" finds no results)

But I guess the big mystery is why humans don't already have this.

Alexander Gietelink Oldenziel
I've heard of this extraordinary finding. As for any extraordinary evidence, the first question should be: is the data accurate? Does anybody know if this has been replicated?
mako yass

A relevant FAQ entry: AI development might go underground

I think I disagree here:

By tracking GPU sales, we can detect large-scale AI development. Since frontier model GPU clusters require immense amounts of energy and custom buildings, the physical infrastructure required to train a large model is hard to hide.

This will change/is only the case for frontier development. I also think we're probably in the hardware overhang. I don't think there is anything inherently difficult to hide about AI, that's likely just a fact about the present iteration of AI.

But I... (read more)

Answer by mako yass

Personally, because I don't believe the policy in the organization's name is viable or helpful.

As to why I don't think it's viable, it would require the Trump-Vance administration to organise a strong global treaty to stop developing a technology that is currently the US's only clear economic lead over the rest of the world.

If you attempted a pause, I think it wouldn't work very well and it would rupture and leave the world in a worse place: Some AI research is already happening in a defence context. This is easy to ignore while defence isn't the frontier.... (read more)

Wyatt S
The Trump-Vance administration's support base is suspicious of academia, and has been willing to defund scientific research on the grounds of it being too left-wing. There is a schism emerging between multiple factions of the right wing: the right-wingers that are more tech-oriented and the ones that are nation/race-oriented (the H1B visa argument being an example). This could lead to a decrease in support for AI in the future. Another possibility is that the United States could lose global relevance due to economic and social pressures from the outside world, and from organizational mismanagement and unrest from within. Then the AI industry could move to the UK/EU, making the main players in AI the UK/EU and China.
RHollerith
I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community even if there are many lone researchers and many small underground teams. And if we could make it illegal for these underground teams to generate revenue by selling AI-based services or to raise money from investors, that would bring me great joy, too.

Research can be modeled as a series of breakthroughs such that it is basically impossible to make breakthrough N before knowing about breakthrough N-1. If the researcher who makes breakthrough N-1 is unable to communicate it to researchers outside of his own small underground cell of researchers, then only that small underground cell or team has a chance at discovering breakthrough N, and research would proceed much more slowly than it does under current conditions.

The biggest hope for our survival is the quite likely and realistic hope that many thousands of person-years of intellectual effort that can only be done by the most talented among us remain to be done before anyone can create an AI that could extinct us. We should be making the working conditions of the (misguided) people doing that intellectual labor as difficult and unproductive as possible. We should restrict or cut off the labs' access to revenue, to investment, to "compute" (GPUs), to electricity and to employees. Employees with the skills and knowledge to advance the field are a particularly important resource for the labs; consequently, we should reduce or restrict their number by making it as hard as possible (illegal preferably) to learn, publish, teach or lecture about deep learning.

Also, in my assessment, we are not getting much by having access to the AI researchers: w
mako yass
A relevant FAQ entry: AI development might go underground I think I disagree here: This will change/is only the case for frontier development. I also think we're probably in the hardware overhang. I don't think there is anything inherently difficult to hide about AI, that's likely just a fact about the present iteration of AI. But I'd be very open to more arguments on this. I guess... I'm convinced there's a decent chance that an international treaty would be enforceable and that China and France would sign onto it if the US was interested, but the risk of secret development continuing is high enough for me that it doesn't seem good on net.

I notice they have a "Why do you protest" section in their FAQ. I hadn't heard of these studies before.

Regardless, I still think there's room to make protests cooler and more fun and less alienating, and when I mentioned this to them they seemed very open to it.

Yeah, I'd seen this. The fact that Grok was ever consistently saying this kind of thing is evidence, though not proof, that they actually may have a culture of generally not distorting its reasoning. They could have introduced propaganda policies at training time; it seems like they haven't done that, and instead decided to just insert some pretty specific prompts that, I'd guess, were probably going to be temporary.

It's real bad, but it's not bad enough for me to shoot yet.

There is evidence, literal written evidence, of Musk trying to censor Grok from saying bad things about him

I'd like to see this

Isopropylpod
https://www.theverge.com/news/618109/grok-blocked-elon-musk-trump-misinformation https://www.businessinsider.com/grok-3-censor-musk-trump-misinformation-xai-openai-2025-2?op=1 The explanation that it was done by "a new hire" is a classic and easy scapegoat. It's much more straightforward to believe Musk himself wanted this done, and walked it back when it was clear it was more obvious than intended.

I wonder if maybe these readers found the story at that time as a result of first being bronies, and I wonder if bronies still think of themselves as a persecuted class.

Answer by mako yass

IIRC, aisafety.info is primarily maintained by Rob Miles, so should be good: https://aisafety.info/how-can-i-help

Answer by mako yass

I'm certain that better resources will arrive but I do have a page for people asking this question on my site, the "what should we do" section. I don't think these are particularly great recommendations (I keep changing them) but it has something for everyone.

These are not concepts of utility that I've ever seen anyone explicitly espouse, especially not here, the place to which it was posted.

cubefox
Hedonic and desire theories are perfectly standard; we've had plenty of people talking about them here, including myself. Jeffrey's utility theory is explicitly meant to model (beliefs and) desires. Both are also often discussed in ethics, including over at the EA Forum. Daniel Kahneman has written about hedonic utility. To equate money with utility is a common simplification in many economic contexts, where expected utility is actually calculated, e.g. when talking about bets and gambles. Even though it isn't held to be perfectly accurate. I didn't encounter the reproduction and energy interpretations before, but they do make some sense.
mako yass

The people who think of utility in the way the article is critiquing don't know what utility actually is; presenting a critique of this tangible utility as a critique of utility in general takes the target audience further away from understanding what utility is.

A utility function is a property of a system rather than a physical thing (like, eg, voltage, or inertia, or entropy). Not being a simple physical substance doesn't make it fictional.

It's extremely non-fictional. A human's utility function encompasses literally everything they care about, ie, everyt... (read more)

Contemplating an argument that free response rarely gets more accurate results for questions like this, because listing the most common answers as checkboxes helps respondents to remember all of the answers that're true of them.

I'd be surprised if LLM use for therapy or summarization is that low irl, and I'd expect people would've just forgotten to mention those usecases. Hope they'll be in the option list this year.

Hmm, I wonder if a lot of trends are drastically underestimated because surveyors are getting essentially false statistics from the Other gutter.

Screwtape
If Other is larger than I expect, I think of that as a reason to try and figure out what the parts of Other are. Amusingly enough for the question, I'm optimistic about solving this by letting people do more free response and having an LLM sift through the responses.

Apparently Anthropic in theory could have released Claude 1 before ChatGPT came out? https://www.youtube.com/live/esCSpbDPJik?si=gLJ4d5ZSKTxXsRVm&t=335

I think the situation would be very different if they had.

Were OpenAI also, in theory, able to release sooner than they did, though?

cubefox
Yes, I think they mentioned that GPT-4 finished training in summer, a few months before the launch of ChatGPT (which used a fine-tuned version of GPT-3.5).
Mateusz Bagiński
Smaller issue but OA did sit on GPT-2 for a few months between publishing the paper and open-sourcing it, apparently due to safety considerations.
mako yass

The assumption that being totally dead/being aerosolised/being decayed vacuum can't be a future experience is unprovable. Panpsychism should be our null hypothesis[1], and there never has and never can be any direct measurement of consciousness that could take us away from the null hypothesis.

Which is to say, I believe it's possible to be dead.

  1. ^

    the negation, that there's something special about humans that makes them eligible to experience, is clearly held up by a conflation of having experiences and reporting experiences and the fact that humans are the o

... (read more)
mako yass

I have preferences about how things are after I stop existing. Mostly about other people, who I love, and at times, want there to be more of.

I am not an epicurean, and I am somewhat skeptical of the reality of epicureans.

cubefox
Exactly. That's also why it's bad for humanity to be replaced by AIs after we die: We don't want it to happen.

It seems like you're assuming a value system where the ratio of positive to negative experience matters but where the ratio of positive to null (dead timelines) experiences doesn't matter. I don't think that's the right way to salvage the human utility function, personally.

the gears to ascension
I don't think Lucius is claiming we'd be happy about it. Maybe the no anticipated impact carries that implicit claim, I guess.
cubefox
It's the old argument by Epicurus from his letter to Menoeceus:
mako yass

Okay? I said they're behind in high-precision machine tooling, not machine tooling in general. That was the point of the video.

Admittedly, I'm not sure what the significance of this is. To make the fastest missiles I'm sure you'd need the best machine tools, but maybe you don't need the fastest missiles if you can make twice as many. Manufacturing automation is much harder if there's random error in the positions of things, but whether we're dealing with that amount of error, I'm not sure.
I'd guess low grade machine tools also probably require high grade machine tools to make.

mako yass

Fascinating. China has always lagged far behind the rest of the world in high-precision machining, and is still a long way behind; they have to buy all of those from other countries. The reasons appear complex.

All of the US and European machine tools that go to China use hardware monitoring and tamperproofing to prevent reverse engineering or misuse. There was a time when US aerospace machine tools reported to the DOC and DOD.

Alexander Gietelink Oldenziel
I watched the video. It doesn't seem to say that China is behind in machine tooling - rather the opposite: prices are falling, capacity is increasing, new technology is rapidly adopted.

Regarding privacy-preserving AI auditing, I notice this is an area where you really need to have a solution to adversarial robustness, given that the adversary 1) is a nation-state, 2) has complete knowledge of the auditor's training process and probably weights (they couldn't really agree to an inspection deal if they didn't trust the auditors to give accurate reports), 3) knows and controls the data the auditor will be inspecting, and 4) never has to show it to you (if they pass the audit).

Given that you're assuming computers can't practically be secured (thou... (read more)

Mm, scenario where mass unemployment can be framed as a discrete event with a name and a face.

I guess I think it's just as likely there isn't an event: human-run businesses die off, new businesses arise, none of them outwardly emphasise their automation levels; the press can't turn it into a scary story because automation and foreclosures are nothing fundamentally new (only in quantity, but you can't photograph a quantity); the public become complicit by buying their cheaper, higher-quality goods and services, so appetite for public discussion remains low.

Kaj_Sotala
I think something doesn't need to be fundamentally new for the press to turn it into a scary story, e.g. news reports about crime or environmental devastation being on the rise have scared a lot of people quite a bit. You can't photograph a quantity but you can photograph individuals affected by a thing and make it feel common by repeatedly running stories of different individuals affected.
mako yass

I wonder what the crisis will be.

I think it's quite likely that if there is a crisis that leads to a beneficial response, it'll be one of these three:

  • An undeployed privately developed system, not yet clearly aligned nor misaligned, either:
    • passes the Humanity's Last Exam benchmark, demonstrating ASI, and the developers go to Congress and say "we have a godlike creature here, you can all talk to it if you don't believe us, it's time to act accordingly."
    • Not quite doing that, but demonstrating dangerous capability levels in red-teaming, ie, replication ability,
... (read more)
davekasten
I think there's at least one missing one, "You wake up one morning and find out that a private equity firm has bought up a company everyone knows the name of, fired 90% of the workers, and says they can replace them with AI."

This is interesting. In general the game does sound like the kind of fun I expect to find in these parts. I'd like to play it. It sounds like it really can be played as a cohabitive game, and maybe it was even initially designed to be played that way?[1], but it looks to me like most people don't understand it this way today. I'm unable to find this manual you quote. I'm coming across multiple reports that victory = winning[2].

Even just introducing the optional concept of victory muddies the exercise by mixing it up with a zero sum one in an ambiguous way.... (read more)

A moral code is invented[1] by a group of people to benefit the group as a whole; it sometimes demands sacrifice from individuals, but a good one usually has the quality that at some point in a person's past, they would have voluntarily signed on with it. Redistribution is a good example. If you have a concave utility function, and if you don't know where you'll end up in life, you should be willing to sign a pledge to later share your resources with less fortunate people who've also signed the pledge, just in case you become one of the less fortunate... (read more)
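
A minimal numeric sketch of that veil-of-ignorance point (the concave function and all payoff numbers below are invented for illustration):

```python
import math

# Hypothetical: two equally likely life outcomes, "fortunate" and "unfortunate".
outcomes = {"fortunate": 100.0, "unfortunate": 4.0}

def utility(wealth):
    # Any concave function works; sqrt is just a simple stand-in.
    return math.sqrt(wealth)

# Expected utility with no redistribution pledge.
no_pledge = sum(utility(w) for w in outcomes.values()) / len(outcomes)

# Expected utility if everyone who signed shares equally, whatever happens.
pooled = sum(outcomes.values()) / len(outcomes)
with_pledge = utility(pooled)

print(no_pledge, with_pledge)  # 6.0 vs ~7.2: signing is better in expectation
```

Because the utility function is concave, pooling outcomes raises expected utility (Jensen's inequality), which is why signing the pledge in advance can be rational for everyone.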

Matthew Barnett
I dispute this, since I've argued for the practical benefits of giving AIs legal autonomy, which I think would likely benefit existing humans. Relatedly, I've also talked about how I think hastening the arrival of AI could benefit people who currently exist. Indeed, that's one of the best arguments for accelerating AI. The argument is that, by ensuring AI arrives sooner, we can accelerate the pace of medical progress, among other useful technologies. This could ensure that currently-existing old people who would otherwise die without AI will be saved and live a longer and healthier life than the alternative. (Of course, this must be weighed against concerns about AI safety. I am not claiming that there is no tradeoff between AI safety and acceleration. Rather, my point is that, despite the risks, accelerating AI could still be the preferable choice.)

However, I do think there is an important distinction here to make between the following groups:
1. The set of all existing humans
2. The human species itself, including all potential genetic descendants of existing humans

Insofar as I have loyalty towards a group, I have much more loyalty towards (1) than (2). It's possible you think that I should see myself as belonging to the coalition comprised of (2) rather than (1), but I don't see a strong argument for that position. To the extent it makes sense to think of morality as arising from game theoretic considerations, there doesn't appear to be much advantage for me in identifying with the coalition of all potential human descendants (group 2) rather than with the coalition of currently existing humans plus potential future AIs (group 1 + AIs). If we are willing to extend our coalition to include potential future beings, then I would seem to have even stronger practical reasons to align myself with a coalition that includes future AI systems. This is because future AIs will likely be far more powerful than any potential biological human descendants. I want to cl
Matthew Barnett
Are you suggesting that I should base my morality on whether I'll be rewarded for adhering to it? That just sounds like selfishness disguised as impersonal ethics. To be clear, I do have some selfish/non-impartial preferences. I care about my own life and happiness, and the happiness of my friends and family. But I also have some altruistic preferences, and my commentary on AI tends to reflect that.
mako yass

Jellychip seems like a necessary tutorial game. I sense comedy in the fact that everyone's allowed to keep secrets and intuitively will try to do something with secrecy despite it being totally wrongheaded. Like the only real difficulty of the game is reaching the decision to throw away your secrecy.

Escaping the island is the best outcome for you. Surviving is the second best outcome. Dying is the worst outcome.

You don't mention how good or bad they are relative to each other though :) an agent cannot make decisions under uncertainty without knowing that.
I... (read more)
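
A minimal sketch of why the ordinal ranking alone isn't enough for decisions under uncertainty (the probabilities, utilities, and the "attempt escape" gamble are all hypothetical):

```python
# Two cardinal utility assignments consistent with the same ordinal ranking:
# escape > survive > die. Which choice is best depends on the exact numbers.
def expected_utility(p_escape, u):
    return p_escape * u["escape"] + (1 - p_escape) * u["die"]

utilities_a = {"escape": 10, "survive": 9, "die": 0}  # surviving nearly as good as escaping
utilities_b = {"escape": 10, "survive": 1, "die": 0}  # surviving barely better than dying

p = 0.5  # a risky escape attempt
for name, u in [("A", utilities_a), ("B", utilities_b)]:
    attempt = expected_utility(p, u)
    stay = u["survive"]
    print(name, "attempt escape" if attempt > stay else "stay put")
# A: stay put (5 < 9); B: attempt escape (5 > 1) — the ranking alone doesn't decide it.
```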

(Plus a certain degree of mathematician crankery: his page on Google Image Search, and how it disproves AI

I'm starting to wonder if a lot/all of the people who are very cynical about the feasibility of ASI have some crank belief or other like that. Plenty of people have private religion, for instance. And sometimes that religion informs their decisions, but they never tell anyone the real reasons underlying these decisions, because they know they could never justify them. They instead say a load of other stuff they made up to support the decisions that never quite adds up to a coherent position because they're leaving something load-bearing out.

I don't think the "intelligence consistently leads to self-annihilation" hypothesis is possible. At least a few times it would amount to robust self-preservation.

Well.. I guess I think it boils down to the dark forest hypothesis. The question is whether your volume of space is likely to contain a certain number of berserkers, and the number wouldn't have to be large for them to suppress the whole thing.

I've always felt the logic of berserker extortion doesn't work, but occasionally you'd get a species that just earnestly wants the forest to be dark and isn't very troubled by their own extinction, no extortion logic required. This would be extremely rare, but the question is, how rare.

avturchin
So there are several possible explanations:
* Intelligence can't evolve as there is not enough selection pressure in the universe with near-light-speed travel.
* Intelligence self-terminates every time.
* Berserkers and dark forest: intelligence is here, but we observe only field animals. Or field animals are designed in a way to increase uncertainty of the observer about possible berserkers.
* Observation selection: in the regions of the universe where intelligence exists, there are no young civilizations as they are destroyed - or exist but are observed by berserkers. So we can observe only field animal-dominated regions or berserkers' worlds.
* Original intelligence has decayed, but field animals are actually robots from some abandoned Disneyland. Or maybe they are paperclips of some non-aligned AI. They are products of civilization decay.

Light-speed migrations with no borders mean homogeneous ecosystems, which can be very constrained things.

In our ecosystems, we get pockets of experimentation. There are whole islands where the birds were allowed to be impractical aesthetes (Indonesia) or flightless blobs (New Zealand). In the field-animal world, islands don't exist; pockets of experimentation like this might not occur anywhere in the observable universe.

If general intelligence for field-animals costs a lot, has no immediate advantages (consistently takes, say, a thousand years of ornament status before it becomes profitable), then it wouldn't get to arise. Could that be the case?

avturchin
Good point. Alternatively, maybe any intelligence above, say, IQ 250 self-terminates either because it discovers the meaninglessness of everything or through effective wars and other existential risks. The rigid simplicity of field animals protects them from all this. They are super-effective survivors, like bacteria which have lived everywhere on Earth for billions of years 

We could back-define "ploitation" as "getting shapley-paid".

Yeah. But if you give up on reasoning about/approximating solomonoff, then where do you get your priors? Do you have a better approach?

Buried somewhere in most contemporary bayesians' priors is the solomonoff prior (the prior that the most likely observations are those that have short generating machine encodings). Do we have a standard symbol for the solomonoff prior? Claude suggests that M is the most common, but it is more often used as a distribution function, or perhaps K for Kolmogorov? (which I like because it can also be thought to stand for "knowledgebase", although really it doesn't represent knowledge, it pretty much represents something prior to knowledge)
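
For reference, one standard way of writing the universal prior (using M for the prior and U for a prefix universal Turing machine; the symbols are my gloss, not necessarily the thread's):

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|}$$

i.e., an observation string x is weighted by the total measure of programs that generate it, so observations with short generating machine encodings dominate.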

JenniferRM
My current "background I" (maybe not the one from 2017, but one I would tend to deploy here in 2024) includes something like: "Kolmogorov complexity is a cool ideal, but it is formally uncomputable in theory unless you have a halting oracle laying around in your cardboard box in your garage labeled Time Travel Stuff, and Solomonoff Induction is not tractably approximably sampled by extant techniques that aren't just highly skilled MCMC".
mako yass

I'd just define exploitation to be precisely the opposite of shapley bargaining: situations where a person is not being compensated in proportion to their bargaining power.

This definition encompasses any situation where a person has grievances and it makes sense for them to complain about them and take a stand, or, where striking could reasonably be expected to lead to a stable bargaining equilibrium with higher net utility (not all strikes fall into this category).

This definition also doesn't fully capture the common sense meaning of exploitation, but I don't think a useful concept can.
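
As a toy illustration of that definition (the three-player production game and every number below are invented for the example; the brute-force Shapley computation over join orders is just one way to get the values):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average marginal contribution of each player over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical firm: the owner alone earns 10, workers alone earn 0,
# and each worker adds 45 once the owner's capital is in the coalition.
def value(coalition):
    if "owner" in coalition:
        return 10 + 45 * sum(1 for p in coalition if p != "owner")
    return 0

pay = {"owner": 70, "worker1": 15, "worker2": 15}  # actual wages (hypothetical)
fair = shapley_values(["owner", "worker1", "worker2"], value)
for p in pay:
    print(p, "shapley:", round(fair[p], 1), "paid:", pay[p])
```

Here each worker's Shapley value comes out to 22.5 while they're paid 15, so under this definition they're being exploited; the owner, paid 70 against a Shapley value of 55, is the one benefiting from the gap.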

mako yass
We could back-define "ploitation" as "getting shapley-paid".

As a consumer I would probably only pay about $250 for the Unitree B2-W wheeled robot dog, because my only use for it is that I want to ride it like a skateboard, and I'm not sure it can do even that.

I see two major non-consumer applications: street-to-door delivery (it can handle stairs and curbs), and war (it can carry heavy things (eg, a gun) over long distances over uneven terrain).

So, Unitree... do they receive any subsidies?

Okay if send rate gives you a reason to think it's spam. Presumably you can set up a system that lets you invade the messages of new accounts sending large numbers of messages that doesn't require you to cross the bright line of doing raw queries.

Any point that you can sloganize and wave around on a picket sign is not the true point, but that's not because the point is fundamentally inarticulable; it just requires more than one picket sign to locate it. Perhaps ten could do it.

mako yass

The human struggle to find purpose is a problem of incidentally very weak integration or dialog between reason and the rest of the brain, and self-delusional but mostly adaptive masking of one's purpose for political positioning. I doubt there's anything fundamentally intractable about it. If we can get the machines to want to carry our purposes, I think they'll figure it out just fine.

Also... you can get philosophical about it, but the reality is, there are happy people, their purpose to them is clear: to create a beautiful life for themselves and their l... (read more)

charlieoneill
I really appreciate your perspective on how much of our drive for purpose is bound up in social signalling and the mismatch between our rational minds and the deeper layers of our psyche. It certainly resonates that many of the individuals gathered at NeurIPS (or any elite technical conference) are restless types, perhaps even deliberately so. Still, I find a guarded hope in the very fact that we keep asking these existential questions in the first place—that we haven’t yet fully succumbed to empty routine or robotic pursuit of prestige. The capacity to reflect on "why we’re doing any of this" might be our uniquely human superpower - even if our attempts at answers are messy or incomplete. As AI becomes more intelligent, I’m cautiously optimistic we might engineer systems that help untangle some of our confusion. If these machines "carry our purposes," as you say, maybe they’ll help us refine those purposes, or at least hold up a mirror we can learn from. After all, intelligence by itself doesn’t have to be sterile or destructive; we have an opportunity to shape it into something that catalyses a more integrated, life-affirming perspective for ourselves.
Vladimir_Nesov
Pursuing some specific "point of it all" can be much more misguided.

Conditions where a collective loss is no worse than an individual loss. A faction who's on the way to losing will be perfectly willing to risk coal extinction, and may even threaten to cross the threshold deliberately to extort other players.

Do people ever talk about dragons and dinosaurs in the same contexts? If so you're creating ambiguities. If not (and I'm having difficulty thinking of any such contexts) then it's not going to create many ambiguities so it's harder to object.

I think I've been calling it "salvaging". To salvage a concept/word allows us to keep using it mostly the same, and to assign familiar and intuitive symbols to our terms, while intensely annoying people with the fact that our definition is different from the normal one and thus constantly creates confusion.

I'm sure it's running through a lot of interpretation, but it has to. He's dealing with people who don't know or aren't open about (unclear which) the consequences of their own policies.

According to Wikipedia, the Biefeld-Brown effect was just ionic drift: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind

I'm not sure what Wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it'll probably turn out to be more of the same.
