All of mako yass's Comments + Replies

This is interesting. In general the game does sound like the kind of fun I expect to find in these parts. I'd like to play it. It sounds like it really can be played as a cohabitive game, and maybe it was even initially designed to be played that way?[1], but it looks to me like most people don't understand it this way today. I'm unable to find this manual you quote. I'm coming across multiple reports that victory = winning[2].

Even just introducing the optional concept of victory muddies the exercise by mixing it up with a zero sum one in an ambiguous way.... (read more)

A moral code is invented[1] by a group of people to benefit the group as a whole. It sometimes demands sacrifice from individuals, but a good one usually has the quality that, at some point in a person's past, they would have voluntarily signed on with it. Redistribution is a good example. If you have a concave utility function, and if you don't know where you'll end up in life, you should be willing to sign a pledge to later share your resources with less fortunate people who've also signed the pledge, just in case you become one of the less fortunate... (read more)
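
A toy version of that pledge calculation (a minimal sketch; the two-outcome setup, the numbers, and the log utility are all illustrative assumptions of mine):

```python
import math

# Illustrative assumption: two equally likely life outcomes, "fortunate" (wealth 100)
# and "unfortunate" (wealth 10), and a concave utility function (log).
u = math.log

# No pledge: you keep whatever you end up with.
ev_no_pledge = 0.5 * u(100) + 0.5 * u(10)   # ~3.45

# Pledge: signatories pool their wealth and split it evenly.
ev_pledge = u((100 + 10) / 2)               # ~4.01

# Because u is concave, u(average wealth) > average of u(wealth),
# so signing is the better bet before you know where you'll land.
print(ev_pledge > ev_no_pledge)             # True
```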

4Matthew Barnett
I dispute this, since I've argued for the practical benefits of giving AIs legal autonomy, which I think would likely benefit existing humans. Relatedly, I've also talked about how I think hastening the arrival of AI could benefit people who currently exist. Indeed, that's one of the best arguments for accelerating AI. The argument is that, by ensuring AI arrives sooner, we can accelerate the pace of medical progress, among other useful technologies. This could ensure that currently-existing old people who would otherwise die without AI will be saved and live a longer and healthier life than the alternative. (Of course, this must be weighed against concerns about AI safety. I am not claiming that there is no tradeoff between AI safety and acceleration. Rather, my point is that, despite the risks, accelerating AI could still be the preferable choice.)

However, I do think there is an important distinction here to make between the following groups:

1. The set of all existing humans
2. The human species itself, including all potential genetic descendants of existing humans

Insofar as I have loyalty towards a group, I have much more loyalty towards (1) than (2). It's possible you think that I should see myself as belonging to the coalition comprised of (2) rather than (1), but I don't see a strong argument for that position. To the extent it makes sense to think of morality as arising from game theoretic considerations, there doesn't appear to be much advantage for me in identifying with the coalition of all potential human descendants (group 2) rather than with the coalition of currently existing humans plus potential future AIs (group 1 + AIs). If we are willing to extend our coalition to include potential future beings, then I would seem to have even stronger practical reasons to align myself with a coalition that includes future AI systems. This is because future AIs will likely be far more powerful than any potential biological human descendants. I want to cl
7Matthew Barnett
Are you suggesting that I should base my morality on whether I'll be rewarded for adhering to it? That just sounds like selfishness disguised as impersonal ethics. To be clear, I do have some selfish/non-impartial preferences. I care about my own life and happiness, and the happiness of my friends and family. But I also have some altruistic preferences, and my commentary on AI tends to reflect that.

Jellychip seems like a necessary tutorial game. I sense comedy in the fact that everyone's allowed to keep secrets and intuitively will try to do something with secrecy despite it being totally wrongheaded. Like the only real difficulty of the game is reaching the decision to throw away your secrecy.

Escaping the island is the best outcome for you. Surviving is the second best outcome. Dying is the worst outcome.

You don't mention how good or bad they are relative to each other though :) an agent cannot make decisions under uncertainty without knowing that.
I... (read more)
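
A minimal illustration of that point (the utility numbers and the 50% escape chance are mine, purely for the example): the same ordinal ranking escape > survive > die supports opposite decisions depending on the relative magnitudes.

```python
# Toy decision: attempt a risky escape (50% escape, 50% death) or stay and survive.
def choose(u_escape, u_survive, u_die, p_escape=0.5):
    risky = p_escape * u_escape + (1 - p_escape) * u_die
    safe = u_survive
    return "attempt escape" if risky > safe else "stay"

# Both lines respect escape > survive > die, yet the decisions differ.
print(choose(10, 9, 0))  # "stay": surviving is nearly as good as escaping
print(choose(10, 1, 0))  # "attempt escape": surviving is barely better than dying
```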

(Plus a certain degree of mathematician crankery: his page on Google Image Search, and how it disproves AI

I'm starting to wonder if a lot/all of the people who are very cynical about the feasibility of ASI have some crank belief or other like that. Plenty of people have private religion, for instance. And sometimes that religion informs their decisions, but they never tell anyone the real reasons underlying these decisions, because they know they could never justify them. They instead say a load of other stuff they made up to support the decisions that never quite adds up to a coherent position because they're leaving something load-bearing out.

I don't think the "intelligence consistently leads to self-annihilation" hypothesis is possible. At least a few times, it would amount to robust self-preservation.

Well.. I guess I think it boils down to the dark forest hypothesis. The question is whether your volume of space is likely to contain a certain number of berserkers, and the number wouldn't have to be large for them to suppress the whole thing.

I've always felt the logic of berserker extortion doesn't work, but occasionally you'd get a species that just earnestly wants the forest to be dark and isn't very troubled by their own extinction, no extortion logic required. This would be extremely rare, but the question is, how rare.

2avturchin
So there are several possible explanations:

* Intelligence can't evolve as there is not enough selection pressure in the universe with near-light-speed travel.
* Intelligence self-terminates every time.
* Berserkers and dark forest: intelligence is here, but we observe only field animals. Or field animals are designed in a way to increase uncertainty of the observer about possible berserkers.
* Observation selection: in the regions of the universe where intelligence exists, there are no young civilizations as they are destroyed - or exist but are observed by berserkers. So we can observe only field animal-dominated regions or berserkers' worlds.
* Original intelligence has decayed, but field animals are actually robots from some abandoned Disneyland. Or maybe they are paperclips of some non-aligned AI. They are products of civilization decay.

Light speed migrations with no borders means homogeneous ecosystems, which can be very constrained things.

In our ecosystems, we get pockets of experimentation. There are whole islands where the birds were allowed to be impractical aesthetes (Indonesia) or flightless blobs (New Zealand). In the field-animal world, islands don't exist, so pockets of experimentation like this might not occur anywhere in the observable universe.

If general intelligence for field-animals costs a lot and has no immediate advantages (consistently takes, say, a thousand years of ornament status before it becomes profitable), then it wouldn't get to arise. Could that be the case?

2avturchin
Good point. Alternatively, maybe any intelligence above, say, IQ 250 self-terminates, either because it discovers the meaninglessness of everything or through effective wars and other existential risks. The rigid simplicity of field animals protects them from all this. They are super-effective survivors, like bacteria, which have lived everywhere on Earth for billions of years.

We could back-define "ploitation" as "getting shapley-paid".

Yeah. But if you give up on reasoning about/approximating solomonoff, then where do you get your priors? Do you have a better approach?

Buried somewhere in most contemporary bayesians' thinking is the solomonoff prior (the prior that the most likely observations are those that have short generating machine encodings). Do we have a standard symbol for the solomonoff prior? Claude suggests that M is the most common, but that is more often used as a distribution function, or perhaps K for Kolmogorov? (which I like because it can also be thought to stand for "knowledgebase", although really it doesn't represent knowledge, it pretty much represents something prior to knowledge)
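
For reference, a minimal sketch of the usual definition (standard algorithmic information theory notation, not anything from the post I'm replying to), writing M for the universal prior and K for Kolmogorov complexity:

```latex
% Solomonoff's universal prior over finite strings x, for a fixed prefix universal
% Turing machine U: every program p whose output starts with x contributes weight
% 2^(-length of p).
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

% The sum is dominated by the shortest such program, so up to constant and
% logarithmic-in-|x| correction terms this is a penalty on Kolmogorov complexity:
M(x) \approx 2^{-K(x)}
```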

3JenniferRM
My current "background I" (maybe not the one from 2017, but one I would tend to deploy here in 2024) includes something like: "Kolmogorov complexity is a cool ideal, but it is formally uncomputable in theory unless you have a halting oracle laying around in your cardboard box in your garage labeled Time Travel Stuff, and Solomonoff Induction is not tractably approximably sampled by extant techniques that aren't just highly skilled MCMC".

I'd just define exploitation to be precisely the opposite of shapley bargaining, situations where a person is not being compensated in proportion to their bargaining power.

This definition encompasses any situation where a person has grievances and it makes sense for them to complain about them and take a stand, or where striking could reasonably be expected to lead to a stable bargaining equilibrium with higher net utility (not all strikes fall into this category).

This definition also doesn't fully capture the common sense meaning of exploitation, but I don't think a useful concept can.
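
To make "getting shapley-paid" concrete, here's a minimal sketch (the three-player characteristic function is a made-up example of mine): a player's Shapley payoff is their average marginal contribution over every order in which the coalition could have assembled, and exploitation on this definition is being paid less than that.

```python
from itertools import permutations

# Made-up characteristic function: value produced by each coalition of a, b, c.
v = {
    (): 0,
    ("a",): 10, ("b",): 0, ("c",): 0,
    ("a", "b"): 40, ("a", "c"): 30, ("b", "c"): 10,
    ("a", "b", "c"): 60,
}

def value(coalition):
    return v[tuple(sorted(coalition))]

def shapley(players):
    # Average each player's marginal contribution over all join orders.
    payoffs = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        so_far = []
        for p in order:
            payoffs[p] += value(so_far + [p]) - value(so_far)
            so_far.append(p)
    return {p: total / len(orders) for p, total in payoffs.items()}

# ~{'a': 31.67, 'b': 16.67, 'c': 11.67}; paying someone less than this is exploitation
# in the above sense, paying them this much is "ploitation".
print(shapley(["a", "b", "c"]))
```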

2mako yass
We could back-define "ploitation" as "getting shapley-paid".

As a consumer I would probably only pay about $250 for the Unitree B2-W wheeled robot dog, because my only use for it is that I want to ride it like a skateboard, and I'm not sure it can do even that.

I see two major non-consumer applications: street-to-door delivery (it can handle stairs and curbs), and war (it can carry heavy things, e.g. a gun, over long distances over uneven terrain).

So, Unitree... do they receive any subsidies?

Okay, if send rate gives you a reason to think it's spam. Presumably you can set up a system that lets you invade the messages of new accounts sending large numbers of messages, one that doesn't require you to cross the bright line of doing raw queries.
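
A minimal sketch of what that could look like (field names and thresholds are hypothetical): a fixed, auditable trigger for review, rather than ad-hoc raw queries over everyone's messages.

```python
# Hypothetical rule: only new accounts with unusually high send rates get surfaced.
def flag_for_review(account):
    return account["age_days"] < 7 and account["messages_sent_today"] > 100

accounts = [
    {"id": 1, "age_days": 2, "messages_sent_today": 500},
    {"id": 2, "age_days": 400, "messages_sent_today": 30},
]
print([a["id"] for a in accounts if flag_for_review(a)])  # [1]
```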

Any point that you can sloganize and wave around on a picket sign is not the true point, but that's not because the point is fundamentally inarticulable, it just requires more than one picket sign to locate it. Perhaps ten could do it.

The human struggle to find purpose is a problem of incidentally very weak integration or dialog between reason and the rest of the brain, and self-delusional but mostly adaptive masking of one's purpose for political positioning. I doubt there's anything fundamentally intractable about it. If we can get the machines to want to carry our purposes, I think they'll figure it out just fine.

Also... you can get philosophical about it, but the reality is, there are happy people, their purpose to them is clear, to create a beautiful life for themselves and their l... (read more)

-1charlieoneill
I really appreciate your perspective on how much of our drive for purpose is bound up in social signalling and the mismatch between our rational minds and the deeper layers of our psyche. It certainly resonates that many of the individuals gathered at NeurIPS (or any elite technical conference) are restless types, perhaps even deliberately so. Still, I find a guarded hope in the very fact that we keep asking these existential questions in the first place—that we haven’t yet fully succumbed to empty routine or robotic pursuit of prestige. The capacity to reflect on "why we’re doing any of this" might be our uniquely human superpower - even if our attempts at answers are messy or incomplete. As AI becomes more intelligent, I’m cautiously optimistic we might engineer systems that help untangle some of our confusion. If these machines "carry our purposes," as you say, maybe they’ll help us refine those purposes, or at least hold up a mirror we can learn from. After all, intelligence by itself doesn’t have to be sterile or destructive; we have an opportunity to shape it into something that catalyses a more integrated, life-affirming perspective for ourselves.
2Vladimir_Nesov
Pursuing some specific "point of it all" can be much more misguided.

Conditions where a collective loss is no worse than an individual loss. A faction who's on the way to losing will be perfectly willing to risk coal extinction, and may even threaten to cross the threshold deliberately to extort other players.

Do people ever talk about dragons and dinosaurs in the same contexts? If so you're creating ambiguities. If not (and I'm having difficulty thinking of any such contexts) then it's not going to create many ambiguities so it's harder to object.

I think I've been calling it "salvaging". To salvage a concept/word allows us to keep using it mostly the same, and to assign familiar and intuitive symbols to our terms, while intensely annoying people with the fact that our definition is different from the normal one and thus constantly creates confusion.

I'm sure it's running through a lot of interpretation, but it has to. He's dealing with people who don't know or aren't open about (unclear which) the consequences of their own policies.

According to wikipedia, the Biefeld–Brown effect was just ionic drift: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind

I'm not sure what wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it'll probably turn out to be more of the same.

I just wish I knew how to make this scalable (like, how do you do this on the internet?) or work even when you don't know the example person that well. If you have ideas, let me know!

Immediate thoughts (not actionable): VR socialisation and vibe-recognising AIs (models trained to predict conversation duration and recurring meetings) (but VR won't be good enough for socialisation until like 2027). VR because it's easier to persistently record, though Apple has made great efforts to set precedents that will make it difficult, especially if you want to use eye track... (read more)

Wow. Marc Andreessen says he had meetings in DC where he was told to stop raising AI startups because it was going to be closed up in a similar way to defence tech, a small number of organisations with close government ties. He said to them, 'you can't restrict access to math, it's already out there', and he says they said "during the cold war we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn't proceed, and if we decide we need to, we're going to do the same thing ... (read more)

4ChristianKl
This basically sounds like there are people in DC who listen to the AI safety community and told Andreessen that they plan to follow at least some demands of the AI safety folks. OpenAI likely lobbied for it. The military people who know that some physics was classified likely don't know the exact physics that was classified. While I would like more information, I would not take this as evidence for much.

He also said interpretability has been solved, so he's not the most calibrated when it comes to truthseeking. Similarly, his story here could be wildly exaggerated and not the full truth.

2mako yass
According to wikipedia, the Biefeld–Brown effect was just ionic drift: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind I'm not sure what wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it'll probably turn out to be more of the same.

All novel information:

The medical examiner’s office determined the manner of death to be suicide and police officials this week said there is “currently, no evidence of foul play.”

Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT

The Mercury News [the writers of this article] and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.

The practice, he told the Times, ran afoul of the country’s “fair use” laws gover

... (read more)
1[comment deleted]

I found that I lost track of the flow in the bullet points.

I'm aware that that's quite normal; I do it sometimes too. I also doubt it's an innate limit, and I think to some extent this is a playful attempt to make people more aware of it. It would be really cool if people could become better at remembering the context of what they're reading. Context-collapse is like, the main problem in online dialog today.

I guess game designers never stop generating challenges that they think will be fun, even when writing. Sometimes a challenge is frustrating, and somet... (read more)

Overall Cohabitive Games so Far sprawls a bit in a couple of places, particularly where bullet points create an unordered list.

I don't think that's a good criticism; those sections are well labelled, and the reader is able to skip them if they're not going to be interested in the contents. In contrast, your article lacks that kind of structure, meandering for 11 paragraphs defining concepts that basically everyone already has installed before dropping the definition of cohabitive game in a paragraph that looks just like any of the others. I'd prefer if you'd o... (read more)

4Screwtape
This is an excellent point and I've added a summary at the start, plus some headers. Thank you! I want to take a moment and note that I'm currently approaching this cooperatively. (Yes, ironic given the subject.) I want the idea of cohabitive games to be in the LessWrong lexicon, I think you also want this, those are the articles we have the chance to put in a higher profile Best Of list, so anything that strengthens either is good.

Plausible this is a stylistic thing and you should feel free to ignore me. I found that I lost track of the flow in the bullet points. For a specific example, the area that starts "Instead of P1's omniscient contract enforcement system..." has a mix of long and short bullets that go like this-

* Instead of P1's omniscient contract enforcement system...
* Let us build a strand-type board game...
* I've heard it suggested that if we got world leaders...
* But I'll make an attempt...
* If you initially score for forests...
* If you want your friends to be happy...
* Taken to an extreme...
* Give some players a binary...

- and when I get to "Give some players a binary..." I've sort of lost track of which level it's on and what thought it's continuing from, in part because "I've heard it suggested..." is long enough to take up most of the screen on my laptop.

Now they aren't :) This is a case where I think the review's sort of caught the development process in amber. Release: Optimal Weave (P1) has the clean game links up front and easy to find; it's the answer to my second part basically. I am still a little worried about those links going dead sometime down the line, though I also think it's quite reasonable to want to keep a prototype where it's easier to update for you and in a format that's best for the standalone game.

I think there's a decent chance this post inspires someone to develop methods for honing a highly neglected facet of collective rationality. The methods might not end up being a game. Games are exercises but most practical learning exercises aren't as intuitively engaging or strategically deep as a game. I think the article holds value regardless just for having pointed out that there is this important, neglected skill.

Despite LW's interest in practical rationality and community thereof, I don't think there's been any discussion of this social skill of ack... (read more)

Do you have similar concerns about humanoid robotics, then?

5Yair Halberstadt
I would have concerns about suitably generic, flexible and sensitive humanoid robots, yes.

At least half of that reluctance is due to concerns about how nanotech will affect the risks associated with AI. Having powerful nanotech around when AI becomes more competent than humans will make it somewhat easier for AIs to take control of the world.

Doesn't progress in nanotech now empower humans far more than it empowers ASI, which was already going to figure it out without us?

Broadly, any increase in human industrial capacity pre-ASI hardens the world against ASI and brings us closer to having a bargaining position when it arrives. E.g., once we have the capacity to put cheap genomic pathogen screeners everywhere → harder for it to infect us with anything novel without getting caught.

4PeterMcCluskey
My guess is that ASI will be faster to adapt to novel weapons and military strategies. Nanotech is likely to speed up the rate at which new weapons are designed and fabricated. Imagine a world in which a rogue AI can replicate a billion drones, of a somewhat novel design, in a week or so. Existing human institutions aren't likely to adapt fast enough to react competently to that.
7Yair Halberstadt
One thing to consider is how hard an AI needs to work to break out of human dependence. There's no point destroying humanity if that then leaves you with no one to man the power stations that keep you alive. If limited nanofactories exist, it's much easier to bootstrap them into whatever you want than it is if those nanofactories don't exist and robotics hasn't developed enough for you to create one without the human touch.

Indicating them as a suspect when the leak is discovered.

Generally the set of people who actually read posts worthy of being marked is in a sense small; people know each other. If you had a process for distributing the work, it would be possible to figure out who's probably doing it.

It would take a lot of energy, but it's energy that probably should be cultivated anyway, the work of knowing each other and staying aligned.
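
A minimal sketch of the narrowing effect (the reader lists are made up): anyone responsible for every leak has to sit in the intersection of the leaked posts' reader lists, so each additional leak shrinks the pool.

```python
# Hypothetical post -> set of accounts that declared intent to read it.
readers = {
    "post1": {"ana", "bo", "cyr", "dee"},
    "post2": {"ana", "bo", "eve"},
    "post3": {"bo", "cyr", "eve"},
}
leaked = ["post1", "post2", "post3"]

suspects = set.intersection(*(readers[p] for p in leaked))
print(suspects)  # {'bo'}
```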

3dirk
Of course this would shrink the suspect pool, but catching the leaker more easily after the fact is very different from the system making it difficult to leak things. Under the proposed system, it would be very easy to leak things.

You can't see the post body without declaring intent to read.

1dirk
But someone who declared intent to read could simply take a picture and send it to any number of people who hadn't declared intent.

I don't think the part that talks can be called the shadow. If you mean you think I lack introspective access to the intuition driving those words, come out and say it, and then we'll see if that's true. If you mean that this mask is extraordinarily shadowish in vibe for confessing to things that masks usually flee, yes, probably, I'm fairly sure that's a necessity for alignment.

Intended for use in vacuum. I guess if it's more of a cylinder than a ring this wouldn't always be faster than an elevator system though.

I guess since it sounds like they're going to be about a km long and 20 stories deep there'll be enough room for a nice running track with minimal upspin/downspin sections.

4Ben
If this was the setup I would bet on "hard man" fitness people swearing that running with the spin to run in a little more than earth normal gravity was great for building strength and endurance and some doctor somewhere would be warning people that the fad may not be good for your long term health.

Relatedly, iirc, this effect would be more noticeable in smaller spinners than in larger ones? Which is one reason people might disprefer smaller ones. Would it be a significant difference? I'm not sure, but if so, jogging would be a bit difficult, either it would quickly become too easy (and then dangerous, once the levitation kicks in) when you're running down-spin, or it would become exhausting when you're running up-spin.

A space where people can't (or wont) jog isn't ideal for human health.
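
A rough calculation of the size of the effect (my numbers: a 5 m/s jogger on rings spun to 1 g at the rim):

```python
import math

g = 9.81

def apparent_g(radius_m, run_speed=5.0):
    # Rim speed needed for 1 g at this radius, then apparent gravity for a runner
    # going with the spin (up-spin) or against it (down-spin).
    rim_speed = math.sqrt(g * radius_m)
    up_spin = (rim_speed + run_speed) ** 2 / radius_m / g
    down_spin = (rim_speed - run_speed) ** 2 / radius_m / g
    return up_spin, down_spin

print(apparent_g(50))   # ~(1.50, 0.60): small ring, jogging changes gravity a lot
print(apparent_g(500))  # ~(1.15, 0.86): big ring, the difference is much milder
```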

4DaemonicSigil
Running parallel to the spin axis would be fine, though.

issue: material transport

You can become weightless in a ring station by running really fast against the spin of the ring.

More practically, by climbing down and out into a despinner on the side of the ring. After being "launched" from the despinner, you would find yourself hovering stationary next to the ring. The torque exerted on the ring by the despinner will be recovered when you enter a respinner on whichever part of the ring you want to reenter.

2bhauth
Air resistance. That is, however, basically the system I proposed near the end, for use near the center of a cylinder where speeds would be low.
3mako yass
Relatedly, iirc, this effect would be more noticeable in smaller spinners than in larger ones? Which is one reason people might disprefer smaller ones. Would it be a significant difference? I'm not sure, but if so, jogging would be a bit difficult, either it would quickly become too easy (and then dangerous, once the levitation kicks in) when you're running down-spin, or it would become exhausting when you're running up-spin. A space where people can't (or wont) jog isn't ideal for human health.

In my disambiguations of the really mysterious aspect of consciousness (indexical prior), I haven't found any support for a concept of continuity. (you could say that continuity over time is likely given that causal entanglement seems to have something to do with the domain of the indexical prior, but I'm not sure we really have a reason to think we can ever observe anything about the indexical prior)

It's just part of the human survival drive; it has very little to do with the metaphysics of consciousness. To understand the extent to which humans really ca... (read more)

Disidentifying the consciousness from the body/shadow/subconscious it belongs to and is responsible for coordinating and speaking for, like many of the things some meditators do, wouldn't be received well by the shadow, and I'd expect it to result in decreased introspective access and control. So, psychonauts be warned.

1cube_flipper
This sounds like a shadow talking. I think it's perfectly viable to align the two.

Huh but some loss of measure would be inevitable, wouldn't it? Given that your outgoing glyph total is going to be bigger than your incoming glyph total, since however many glyphs you summon, some of the non-glyph population are going to whittle and add to the outgoing glyphs.

I'm remembering more. I think a lot of it was about avoiding "arbitrary reinstantiation", this idea that when a person dies, their consciousness continues wherever that same pattern still counts as "alive", and usually those are terrible places. Boltzmann brains for instance. This might be part of the reason I don't care about patternist continuity. Seems like a lost cause. I'll just die normally thank you.

We call this one "Korby".

[image: a cluster of 7 circles that looks vaguely like a human]

Korby is going to be a common choice for humans, but most glyphists won't commit to any specific glyph until we have a good estimate of the multiversal frequency of humanoids relative to other body forms. I don't totally remember why, but glyphists try to avoid "congestion", where the distribution of glyphs going out of dying universes differs from the distribution of glyphs being guessed and summoned on the other side by young universes. I think this was considered to introduce some inefficiencies that meant that some experiential ... (read more)

8cube_flipper
I would suggest looking at things like three-dimensional space groups and their colourings as candidate glyphs, given they seem to be strong attractors in high-energy DMT states.
3Thane Ruthenis
Yeah, I don't know that this glyphisation process would give us what we actually want. "Consciousness" is a confused term. Taking on a more executable angle, we presumably value some specific kinds of systems/algorithms corresponding to conscious human minds. We especially value various additional features of these algorithms, such as specific personality traits, memories, et cetera. A system that has the features of a specific human being would presumably be valued extremely highly by that same human being. A system that has fewer of those features would be valued increasingly less (in lockstep with how unlike "you" it becomes), until it's only as valuable as e.g. a randomly chosen human/sentient being.

So if you need to mold yourself into a shape where some or all of the features which you use to define yourself are absent, each loss is still a loss, even if it happens continuously/gradually. So from a global perspective, it's not much different than acausal aliens resurrecting Schelling-point Glyph Beings without you having warped yourself into a Glyph Being over time. If you value systems that are like Glyph Beings, their creation somewhere in another universe is still positive by your values. If you don't, if you only value human-like systems, then someone creating Glyph Beings brings no joy. Whether you or your friends warped yourself into a Glyph Being in the process doesn't matter.
4mako yass
Huh but some loss of measure would be inevitable, wouldn't it? Given that your outgoing glyph total is going to be bigger than your incoming glyph total, since however many glyphs you summon, some of the non-glyph population are going to whittle and add to the outgoing glyphs. I'm remembering more. I think a lot of it was about avoiding "arbitrary reinstantiation", this idea that when a person dies, their consciousness continues wherever that same pattern still counts as "alive", and usually those are terrible places. Boltzmann brains for instance. This might be part of the reason I don't care about patternist continuity. Seems like a lost cause. I'll just die normally thank you.

Yes. Some of my people have a practice where, as the heat death approaches, we will whittle ourselves down into what we call Glyph Beings, archetypal beings who are so simple that there's a closed set of them that will be schelling-inferred by all sorts of civilisations across all sorts of universes, so that they exist as indistinguishable experiences of being at a high rate everywhere.
Correspondingly, as soon as we have enough resources to spare, we will create lots and lots of Glyph Beings and then let them grow into full people and participate in our so... (read more)

5avturchin
Maybe that's why people meditate – they enter a simple state of mind that emerges everywhere.
9mako yass
We call this one "Korby". Korby is going to be a common choice for humans, but most glyphists won't commit to any specific glyph until we have a good estimate of the multiversal frequency of humanoids relative to other body forms. I don't totally remember why, but glyphists try to avoid "congestion", where the distribution of glyphs going out of dying universes differs from the distribution of glyphs being guessed and summoned on the other side by young universes. I think this was considered to introduce some inefficiencies that meant that some experiential chains would have to be getting lost in the jump? (But yeah, personally, I think this is all a result of a kind of precious view about experiential continuity that I don't share. I don't really believe in continuity of consciousness. Or maybe it's just that I don't have the same kind of self-preservation goals that a lot of people have.)

Listened to the Undark. I'll at least say I don't think anything went wrong, though I don't feel like there was substantial engagement. I hope further conversations do happen, I hope you'll be able to get a bit more personal and talk about reasoning styles instead of trying to speak on the object-level about an inherently abstract topic, and I hope the guy's paper ends up being worth posting about.

3Daniel Kokotajlo
Thanks!

What makes a discussion heavy? What requires that a conversation be conducted in a way that makes it heavy?

I feel like for a lot of people it just never has to be, but I'm pretty sure most people have triggers even if they're not aware of them, and it would help if we knew what sets this off so that we can root them out.

1TrudosKudos
Fair question - I guess that a certain discussion doesn't necessarily have to be "heavy", but I believe that humans are far too unskilled at communication, especially in social settings, for the majority of these interactions not to flare up some level of insecurity or human bias, or offend someone's held beliefs. I personally would say that I'm quite good at navigating a social context while also being able to broach traditionally taboo'd topics, but I do not think I represent the norm. The general heuristic of avoiding potentially controversial topics has served me well in most social settings like this, where I believe the purpose is for social connection on a lighter, surface level. As I replied to a previous comment, I think I'd append to my original comment that if OP was advocating for banning normal parties in favor of this format, I'd be against that. I suppose the idea I'm trying to communicate is that it's important to know your setting and the social context before engaging in these types of behaviors, and that this habit will serve you far better in facilitating social connection.

You acknowledge the bug, but don't fully explain how to avoid it by putting EVs before Ps, so I'll elaborate slightly on that:

This way, they [the simulators] can influence the predictions of entities like me in base Universes

This is the part where we can escape the problem, as long as our oracle's goal is to give accurate answers to its makers in the base universe, rather than to give accurate probabilities wherever it is. Design it correctly, and it will be indifferent to its performance in simulations and won't regard them.

Don't make pure oracles, though. ... (read more)

Hmm. I think the core thing is transparency. So if it cultivates human network intelligence, but that intelligence is opaque to the user, then it's an algorithm. Algorithms can have both machine and egregoric components.

In my understanding of English, when people say "algorithm" about social media systems, it doesn't encompass very simple, transparent ones. It would be like calling a rock a spirit.

Maybe we should call those recommenders?
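
For contrast, a minimal sketch of the kind of simple, transparent recommender I mean (field names and weights are hypothetical): every term in the score is inspectable, unlike an opaque learned ranking.

```python
from datetime import datetime, timedelta, timezone

def score(post, now):
    # Two inspectable terms: engagement from accounts you follow, minus an age penalty.
    age_hours = (now - post["created_at"]).total_seconds() / 3600
    return post["likes_from_followed"] - 0.5 * age_hours

now = datetime.now(timezone.utc)
posts = [
    {"id": "a", "likes_from_followed": 3, "created_at": now - timedelta(hours=1)},
    {"id": "b", "likes_from_followed": 10, "created_at": now - timedelta(hours=20)},
]
feed = sorted(posts, key=lambda p: score(p, now), reverse=True)
print([p["id"] for p in feed])  # ['a', 'b']: recency outweighs the extra likes here
```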

2jefftk
Interesting! Would the original EdgeRank be an algorithm, or is it too simple?

For a while I just stuck to that, but eventually it occurred to me that the rules of following mode favor whoever tweets the most, which is a similar social problem to when meetups end up favoring whoever talks the loudest and interrupts the most, and so I came to really prefer bsky's "Quiet Posters" mode.

Markets put bsky exceeding twitter at 44%, 4x higher than mastodon.
My P would be around 80%. I don't think most people (who use social media much in the first place) are proud to be on twitter. The algorithm has been horrific for a while and bsky at least offers algorithmic choice (but only one feed right now is a sophisticated algorithm, and though that algorithm isn't impressive, it at least isn't repellent)

For me, I decided I had to move over (@makoConstruct) when twitter blocked links to rival systems, which included substack. They seem to have made th... (read more)

3cubefox
After Musk took over, they implemented a mode which doesn't use an algorithm on the timeline at all. It's the "following" tab.

judo flip the situation like he did with the OpenAI board saga, and somehow magically end up replacing Musk or Trump in the upcoming administration...

If Trump dies, Vance is in charge, and he's previously espoused bland e/acc-ism.

I keep thinking: Everything depends on whether Elon and JD can be friends.

5Zack_M_Davis
I don't think Vance is e/acc. He has said positive things about open source, but consider that the context was specifically about censorship and political bias in contemporary LLMs (bolding mine): The words I've bolded indicate that Vance is at least peripherally aware that the "tech people [...] complaining about safety" are a different constituency than the "DEI bullshit" he deplores. If future developments or rhetorical innovations persuade him that extinction risk is a serious concern, it seems likely that he'd be on board with "bipartisan efforts to regulate for safety."

So there was an explicit emphasis on alignment to the individual (rather than alignment to society, or the aggregate sum of wills). Concerning. The approach of just giving every human an exclusively loyal servant doesn't necessarily lead to good collective outcomes; it can result in coordination problems (example: naive implementations of cognitive privacy that allow sadists to conduct torture simulations without having to compensate the anti-sadist human majority), and it leaves open the possibility for power concentration to immediately return.

Even if you... (read more)

3Haiku
Not building a superintelligence at all is best. This whole exchange started with Sam Altman apparently failing to notice that governments exist and can break markets (and scientists) out of negative-sum games.