A moral code is invented[1] by a group of people to benefit the group as a whole. It sometimes demands sacrifice from individuals, but a good one usually has the quality that, at some point in a person's past, they would have voluntarily signed on to it. Redistribution is a good example: if you have a concave utility function, and you don't know where you'll end up in life, you should be willing to sign a pledge to later share your resources with less fortunate people who've also signed the pledge, just in case you become one of the less fortunate...
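To make the veil-of-ignorance argument concrete, here's a minimal sketch (my own numbers, with a square-root utility standing in for any concave utility function):

```python
import math

# Toy model (illustrative numbers, not from the comment): behind a veil of
# ignorance you have a 50/50 chance of ending up with 100 units of resources or 0.
u = math.sqrt          # any concave utility function makes the same point
p_fortunate = 0.5

# Expected utility without the pledge: keep whatever you happen to get.
ev_no_pledge = p_fortunate * u(100) + (1 - p_fortunate) * u(0)   # = 5.0

# Expected utility with the pledge: resources are pooled and shared equally.
ev_pledge = u(p_fortunate * 100 + (1 - p_fortunate) * 0)         # = sqrt(50) ≈ 7.07

print(ev_no_pledge, ev_pledge)  # signing the pledge is the better gamble ex ante
```

With a linear (risk-neutral) utility the two options tie, which is why the concavity assumption is doing the work here.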
Jellychip seems like a necessary tutorial game. I sense comedy in the fact that everyone's allowed to keep secrets and intuitively will try to do something with secrecy despite it being totally wrongheaded. Like the only real difficulty of the game is reaching the decision to throw away your secrecy.
Escaping the island is the best outcome for you. Surviving is the second best outcome. Dying is the worst outcome.
You don't mention how good or bad they are relative to each other though :) An agent cannot make decisions under uncertainty without knowing that.
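A minimal illustration of that point (toy numbers of my own, not from the game): with the same ordinal ranking of outcomes, the right choice flips depending on how much better escaping is than merely surviving.

```python
# Hypothetical escape decision: attempting escape succeeds with probability 0.4,
# a failed attempt means death, and staying put guarantees survival (utility 9).
p_escape = 0.4

def ev_attempt(u_escape, u_die):
    return p_escape * u_escape + (1 - p_escape) * u_die

# Same ranking (escape > survive > die), different cardinal values, opposite choices:
print(ev_attempt(10, 0) > 9)    # False: escaping barely beats surviving -> stay put
print(ev_attempt(100, 0) > 9)   # True: escaping is far better -> attempt it
```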
I...
(Plus a certain degree of mathematician crankery: his page on Google Image Search, and how it disproves AI
I'm starting to wonder if a lot/all of the people who are very cynical about the feasibility of ASI have some crank belief or other like that. Plenty of people have private religion, for instance. And sometimes that religion informs their decisions, but they never tell anyone the real reasons underlying these decisions, because they know they could never justify them. They instead say a load of other stuff they made up to support the decisions that never quite adds up to a coherent position because they're leaving something load-bearing out.
I don't think the "intelligence consistently leads to self-annihilation" hypothesis is possible. At least a few times, intelligence would instead amount to robust self-preservation.
Well... I guess I think it boils down to the dark forest hypothesis. The question is whether your volume of space is likely to contain a certain number of berserkers, and the number wouldn't have to be large for them to suppress the whole thing.
I've always felt the logic of berserker extortion doesn't work, but occasionally you'd get a species that just earnestly wants the forest to be dark and isn't very troubled by their own extinction, no extortion logic required. This would be extremely rare, but the question is, how rare.
Light speed migrations with no borders means homogeneous ecosystems, which can be very constrained things.
In our ecosystems, we get pockets of experimentation. There are whole islands where the birds were allowed to be impractical aesthetes (Indonesia) or flightless blobs (New Zealand). In the field-animal world, islands don't exist, so pockets of experimentation like this might not occur anywhere in the observable universe.
If general intelligence for field-animals costs a lot and has no immediate advantages (consistently takes, say, a thousand years of ornament status before it becomes profitable), then it wouldn't get to arise. Could that be the case?
Buried somewhere in most contemporary Bayesians' thinking is the Solomonoff prior (the prior that the most likely observations are those that have short generating machine encodings). Do we have a standard symbol for the Solomonoff prior? Claude suggests one is most common, but that symbol is more often used as a distribution function; or perhaps K, for Kolmogorov? (which I like because it can also be thought to stand for "knowledgebase", although really it doesn't represent knowledge, it pretty much represents something prior to knowledge)
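For reference, the standard construction (textbook definition, not something from the thread): relative to a universal prefix machine $U$, the prior weight of a string $x$ sums over all programs that generate it, so shorter generating programs dominate.

```latex
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
```

where $|p|$ is the length of program $p$ in bits.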
I'd just define exploitation to be precisely the opposite of Shapley bargaining: situations where a person is not being compensated in proportion to their bargaining power.
This definition encompasses any situation where a person has grievances and it makes sense for them to complain about them and take a stand, or where striking could reasonably be expected to lead to a stable bargaining equilibrium with higher net utility (not all strikes fall into this category).
This definition also doesn't fully capture the common sense meaning of exploitation, but I don't think a useful concept can.
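As a toy illustration of what "compensated in proportion to bargaining power" cashes out to, here's a minimal Shapley-value sketch (the production numbers and the owner/worker setup are mine, purely hypothetical):

```python
from itertools import permutations

# Hypothetical firm: an owner contributes capital, two workers contribute labour.
# Nobody produces anything alone; owner + one worker produce 10, owner + both produce 16.
def value(coalition):
    members = frozenset(coalition)
    if "owner" not in members:
        return 0
    return {0: 0, 1: 10, 2: 16}[len(members) - 1]

players = ["owner", "w1", "w2"]

def shapley(player):
    # average the player's marginal contribution over every ordering of arrivals
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        before = order[:order.index(player)]
        total += value(before + (player,)) - value(before)
    return total / len(orders)

for p in players:
    print(p, round(shapley(p), 2))
# owner ≈ 8.67, each worker ≈ 3.67; paying a worker far below that share would
# count as exploitation under the definition above.
```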
As a consumer I would probably only pay about $250 for the Unitree B2-W wheeled robot dog, because my only use for it is that I want to ride it like a skateboard, and I'm not sure it can do even that.
I see two major non-consumer applications: street-to-door delivery (it can handle stairs and curbs), and war (it can carry heavy things (e.g., a gun) long distances over uneven terrain).
So, Unitree... do they receive any subsidies?
The human struggle to find purpose is a problem of incidentally very weak integration or dialog between reason and the rest of the brain, and of self-delusional but mostly adaptive masking of one's purpose for political positioning. I doubt there's anything fundamentally intractable about it. If we can get the machines to want to carry our purposes, I think they'll figure it out just fine.
Also... you can get philosophical about it, but the reality is, there are happy people, and their purpose, to them, is clear: to create a beautiful life for themselves and their l...
I think I've been calling it "salvaging". To salvage a concept/word allows us to keep using it mostly the same, and to assign familiar and intuitive symbols to our terms, while intensely annoying people with the fact that our definition is different from the normal one and thus constantly creates confusion.
According to Wikipedia, the Biefeld–Brown effect was just ionic drift: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind
I'm not sure what Wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it'll probably turn out to be more of the same.
I just wish I knew how to make this scalable (like, how do you do this on the internet?) or work even when you don't know the example person that well. If you have ideas, let me know!
Immediate thoughts (not actionable): VR socialisation, and vibe-recognising AIs (models trained to predict conversation duration and recurring meetings) (but VR won't be good enough for socialisation until like 2027). VR because it's easier to persistently record, though Apple has made great efforts to set precedents that will make it difficult, especially if you want to use eye track...
Wow. Marc Andreessen says he had meetings in DC where he was told to stop raising AI startups because AI was going to be closed up in a similar way to defence tech: a small number of organisations with close government ties. He said to them, 'you can't restrict access to math, it's already out there', and he says they said "during the cold war we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn't proceed, and if we decide we need to, we're going to do the same thing ...
All novel information:
...The medical examiner’s office determined the manner of death to be suicide and police officials this week said there is “currently, no evidence of foul play.”
Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT
The Mercury News [the writers of this article] and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.
The practice, he told the Times, ran afoul of the country’s “fair use” laws gover
I found that I lost track of the flow in the bullet points.
I'm aware that that's quite normal; I do it sometimes too. I also doubt it's an innate limit, and I think to some extent this is a playful attempt to make people more aware of it. It would be really cool if people could become better at remembering the context of what they're reading. Context-collapse is, like, the main problem in online dialog today.
I guess game designers never stop generating challenges that they think will be fun, even when writing. Sometimes a challenge is frustrating, and somet...
Overall Cohabitive Games so Far sprawls a bit in a couple of places, particularly where bullet points create an unordered list.
I don't think that's a good criticism; those sections are well labelled, and the reader is able to skip them if they're not going to be interested in the contents. In contrast, your article lacks that kind of structure, meandering for 11 paragraphs defining concepts that basically everyone already has installed before dropping the definition of cohabitive game in a paragraph that looks just like any of the others. I'd prefer if you'd o...
I think there's a decent chance this post inspires someone to develop methods for honing a highly neglected facet of collective rationality. The methods might not end up being a game; games are exercises, but most practical learning exercises aren't as intuitively engaging or strategically deep as a game. I think the article holds value regardless, just for having pointed out that there is this important, neglected skill.
Despite LW's interest in practical rationality and community thereof, I don't think there's been any discussion of this social skill of ack...
At least half of that reluctance is due to concerns about how nanotech will affect the risks associated with AI. Having powerful nanotech around when AI becomes more competent than humans will make it somewhat easier for AIs to take control of the world.
Doesn't progress in nanotech now empower humans far more than it empowers ASI, which was already going to figure it out without us?
Broadly, any increase in human industrial capacity pre-ASI hardens the world against ASI and brings us closer to having a bargaining position when it arrives. E.g., once we have the capacity to put cheap genomic pathogen screeners everywhere, it becomes harder for it to infect us with anything novel without getting caught.
Indicating them as a suspect when the leak is discovered.
Generally, the set of people who actually read posts worthy of being marked is in a sense small; people know each other. If you had a process for distributing the work, it would be possible to figure out who's probably doing it.
It would take a lot of energy, but it's energy that probably should be cultivated anyway, the work of knowing each other and staying aligned.
I don't think the part that talks can be called the shadow. If you mean you think I lack introspective access to the intuition driving those words, come out and say it, and then we'll see if that's true. If you mean that this mask is extraordinarily shadowish in vibe for confessing to things that masks usually flee, yes, probably; I'm fairly sure that's a necessity for alignment.
Relatedly, IIRC, this effect would be more noticeable in smaller spinners than in larger ones? Which is one reason people might disprefer smaller ones. Would it be a significant difference? I'm not sure, but if so, jogging would be a bit difficult: either it would quickly become too easy (and then dangerous, once the levitation kicks in) when you're running down-spin, or it would become exhausting when you're running up-spin.
A space where people can't (or won't) jog isn't ideal for human health.
issue: material transport
You can become weightless in a ring station by running really fast against the spin of the ring.
More practically, by climbing down and out into a despinner on the side of the ring. After being "launched" from the despinner, you would find yourself hovering stationary next to the ring. The torque exerted on the ring by the despinner will be recovered when you enter a respinner on whichever part of the ring you want to reenter.
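A rough sanity check on the "run yourself weightless" idea (the radii are my own example values, not from the post): you'd need to cancel the rim speed, which is only a sprintable number for quite small rings.

```python
import math

# To float, your ground speed against the spin must match the rim speed v = omega * r,
# where omega is whatever spin rate gives 1g at the rim.
g = 9.81
for r in (10, 50, 250):              # ring radius in metres (example values)
    omega = math.sqrt(g / r)         # rad/s for 1g of spin gravity at the rim
    v = omega * r                    # running speed needed to become weightless
    print(f"r = {r:>3} m -> rim speed ≈ {v:4.1f} m/s")
# ≈ 9.9 m/s at 10 m (a flat-out sprint), ≈ 22 m/s at 50 m, ≈ 49.5 m/s at 250 m,
# which is why the despinner is the practical route on anything habitat-sized.
```

The same numbers also bear on the jogging comment above: the smaller the ring, the larger the fraction of the rim speed a jogger reaches, so the stronger the down-spin/up-spin asymmetry.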
In my disambiguations of the really mysterious aspect of consciousness (the indexical prior), I haven't found any support for a concept of continuity. (You could say that continuity over time is likely, given that causal entanglement seems to have something to do with the domain of the indexical prior, but I'm not sure we really have a reason to think we can ever observe anything about the indexical prior.)
It's just part of the human survival drive, it has very little to do with the metaphysics of consciousness. To understand the extent to which humans really ca...
Disidentifying the consciousness from the body/shadow/subconscious it belongs to and is responsible for coordinating and speaking for, like many of the things some meditators do, wouldn't be received well by the shadow, and I'd expect it to result in decreased introspective access and control. So, psychonauts be warned.
Huh, but some loss of measure would be inevitable, wouldn't it? Given that your outgoing glyph total is going to be bigger than your incoming glyph total, since however many glyphs you summon, some of the non-glyph population are going to whittle and add to the outgoing glyphs.
I'm remembering more. I think a lot of it was about avoiding "arbitrary reinstantiation", this idea that when a person dies, their consciousness continues wherever that same pattern still counts as "alive", and usually those are terrible places. Boltzmann brains for instance. This might be part of the reason I don't care about patternist continuity. Seems like a lost cause. I'll just die normally thank you.
We call this one "Korby".
Korby is going to be a common choice for humans, but most glyphists won't commit to any specific glyph until we have a good estimate of the multiversal frequency of humanoids relative to other body forms. I don't totally remember why, but glyphists try to avoid "congestion", where the distribution of glyphs going out of dying universes differs from the distribution of glyphs being guessed and summoned on the other side by young universes. I think this was considered to introduce some inefficiencies that meant that some experiential ...
Yes. Some of my people have a practice where, as the heat death approaches, we will whittle ourselves down into what we call Glyph Beings: archetypal beings who are so simple that there's a closed set of them that will be Schelling-inferred by all sorts of civilisations across all sorts of universes, so that they exist as indistinguishable experiences of being at a high rate everywhere.
Correspondingly, as soon as we have enough resources to spare, we will create lots and lots of Glyph Beings and then let them grow into full people and participate in our so...
Listened to the Undark. I'll at least say I don't think anything went wrong, though I don't feel like there was substantial engagement. I hope further conversations do happen, I hope you'll be able to get a bit more personal and talk about reasoning styles instead of trying to speak on the object-level about an inherently abstract topic, and I hope the guy's paper ends up being worth posting about.
What makes a discussion heavy? What requires that a conversation be conducted in a way that makes it heavy?
I feel like for a lot of people it just never has to be, but I'm pretty sure most people have triggers even if they're not aware of them, and it would help if we knew what sets this off so that we can root them out.
You acknowledge the bug, but don't fully explain how to avoid it by putting EVs before Ps, so I'll elaborate slightly on that:
This way, they [the simulators] can influence the predictions of entities like me in base Universes
This is the part where we can escape the problem, as long as our oracle's goal is to give accurate answers to its makers in the base universe, rather than to give accurate probabilities wherever it happens to be. Design it correctly, and it will be indifferent to its performance in simulations and won't regard them.
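A minimal toy model of that design choice (the world setup and numbers are hypothetical, just to show the shape of it): an oracle scored only on base-universe accuracy ignores how its answers would play out across simulated copies of itself.

```python
# Hypothetical worlds the oracle might be instantiated in: one base universe and
# many simulations run by entities trying to steer its predictions.
worlds = [
    {"base": True,  "p": 0.1, "accuracy": {"A": 1.0, "B": 0.6}},
    {"base": False, "p": 0.9, "accuracy": {"A": 0.2, "B": 0.9}},  # simulators reward B
]

def expected_accuracy(answer, base_only):
    # score over all worlds, or only over the base universe, depending on design
    relevant = [w for w in worlds if w["base"] or not base_only]
    return sum(w["p"] * w["accuracy"][answer] for w in relevant)

# "Be accurate wherever I am" caters to the simulators; "be accurate for my makers
# in the base universe" does not.
print(max("AB", key=lambda a: expected_accuracy(a, base_only=False)))  # B
print(max("AB", key=lambda a: expected_accuracy(a, base_only=True)))   # A
```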
For a while I just stuck to that, but eventually it occurred to me that the rules of following mode favor whoever tweets the most, which is a similar social problem to the one where meetups end up favoring whoever talks the loudest and interrupts the most, and so I came to really prefer bsky's "Quiet Posters" mode.
Markets put bsky exceeding twitter at 44%, 4x higher than mastodon.
My P would be around 80%. I don't think most people (who use social media much in the first place) are proud to be on twitter. The algorithm has been horrific for a while, and bsky at least offers algorithmic choice (though only one feed right now is a sophisticated algorithm, and while that algorithm isn't impressive, it at least isn't repellent).
For me, I decided I had to move over (@makoConstruct) when twitter blocked links to rival systems, which included substack. They seem to have made th...
judo flip the situation like he did with the OpenAI board saga, and somehow magically end up replacing Musk or Trump in the upcoming administration...
If Trump dies, Vance is in charge, and he's previously espoused bland eaccism.
I keep thinking: Everything depends on whether Elon and JD can be friends.
So there was an explicit emphasis on alignment to the individual (rather than alignment to society, or to the aggregate sum of wills). Concerning. The approach of just giving every human an exclusively loyal servant doesn't necessarily lead to good collective outcomes: it can result in coordination problems (example: naive implementations of cognitive privacy that allow sadists to conduct torture simulations without having to compensate the anti-sadist human majority), and it leaves open the possibility for power concentration to immediately return.
Even if you...
This is interesting. In general, the game does sound like the kind of fun I expect to find in these parts; I'd like to play it. It sounds like it really can be played as a cohabitive game, and maybe it was even initially designed to be played that way?[1] But it looks to me like most people don't understand it this way today. I'm unable to find the manual you quote, and I'm coming across multiple reports that victory = winning[2].
Even just introducing the optional concept of victory muddies the exercise by mixing it up with a zero-sum one in an ambiguous way...