All of samuelshadrach's Comments + Replies

Got it!

I haven't spent a lot of time thinking about this myself. But here's one suggestion:

For any idea you have, also imagine 20 other neighbouring ideas, ideas which are superficially similar but ultimately not the same.

The reason I'm suggesting this exercise is that ideas keep mutating. If you try to popularise any set of ideas, people are going to come up with every possible modification and interpretation of them. And eventually some of those are going to become more popular and others less popular.

For example with "no removing a core aspec... (read more)

I'm unsure what the theory of change associated with your LW post is. If you have a theory of change associated with it that also makes sense to me, my guess is you'd focus a lot more on cultural attitudes and incentives, and a lot less on legality or technical definitions.

The process for getting a certain desirable future is imo likely not going to be that you create the law first and everyone complies with it later when the tech is deployed.

It'll look more like the biotech companies deploy the tech in a certain way, then a bunch of citizens get used to u... (read more)

3TsviBT
Yeah I'm not, like, trying to sneak this in as a law or something. It's a proposed policy principle, i.e. a proposed piece of culture. My main motive here is just to figure out what a good world with germline engineering could/would look like, and a little bit to start promoting that vision as something to care about and work towards. I agree that practical technology will push the issue, but I think it's good to think about how to make the world with this technology good, rather than just deferring that.

Besides the first-order thing where you're just supposed to try to make technology end up going well, it's also good to think about the question for cooperative reasons. For one thing, pushing technology ahead without thinking about whether or how it will turn out well is reckless / defecty, and separately it looks reckless / defecty. That would justify people pushing against accelerating the technology, and would give people reason to feel skittish about the area (because it contains people being reckless / defecty). For another thing, having a vision of a good world seems like it ought to be motivating to scientists and technologists.

Forum devs, including LessWrong devs, could consider implementing an "ACK" button on any comment, indicating "I've read this comment." This is distinct from:

a) Not replying: the other person doesn't know whether I've read their comment or not

b) Replying with something trivial like "okay thanks": the other person gets a notification even though I have nothing of value to say

I already maybe mentioned this in some earlier discussion so maybe it’s not worth rehashing in detail but…

I strongly feel laws are downstream of culture. Instead of thinking about which laws are best, it seems worthwhile to me to try thinking about which culture is best. The First Amendment in the US is protected by culture rather than just by laws; if the culture changed, then so would the laws. Same here with genomic liberty. Laws can be changed, and their enforcement in day-to-day life can be changed. (Every country has examples of laws that exist on books but don’t get e... (read more)

2TsviBT
I agree that culture is important and that I contribute a very small amount to deciding what culture looks like. What do you think I'm imagining that reality will look different from?

Got it. As of today a common setup is to let the LLM query an embedding database multiple times (or let it do Google searches, which probably has an embedding database as a significant component).

Self-learning seems like a missing piece. Once the LLM gets some content from the embedding database, performs some reasoning and reaches a novel conclusion, there’s no way to preserve this novel conclusion long-term.

When smart humans use Google we also keep updating our own beliefs in response to our searches.

P.S. I chose not to build the whole LLM + embedding sea... (read more)
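The setup described above boils down to nearest-neighbour search over embedding vectors. Here's a minimal sketch of the retrieval step, with tiny hand-made vectors standing in for real model embeddings (the chunk texts and vectors are purely illustrative):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, k=1):
    """Return the k chunk texts whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy index of (chunk_text, embedding) pairs; a real system stores model embeddings.
index = [
    ("chunk about book recommendations", [0.9, 0.1, 0.0]),
    ("chunk about train networks",       [0.1, 0.8, 0.2]),
    ("chunk about genetic engineering",  [0.0, 0.2, 0.9]),
]

print(retrieve([0.85, 0.15, 0.05], index))  # nearest chunk to the query vector
```

The missing self-learning step would amount to embedding the LLM's novel conclusions and appending them to the index, so that later queries can retrieve them; current setups typically don't do this.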

Cool!

It's useful information that you'd still prefer using ChatGPT over this. Is that true even when you're looking for book recommendations specifically? If so, yeah, that means I failed at my goal tbh. Just wanna know.

Since I'm spending my personal funds, I can't afford to use the best embeddings on this dataset. For example, text-embedding-3-large is ~7x more expensive for generating embeddings and is slightly better quality.

The other cost is hosting cost, for which I don’t see major differences between the models. OpenAI gives 1536 float32 dims per 1000 char chu... (read more)
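To make the hosting-cost point concrete, a back-of-the-envelope sketch using the figures quoted above (the only added assumption is float32 storage, i.e. 4 bytes per dimension):

```python
dims = 1536         # dimensions per embedding (OpenAI text-embedding-3-small)
bytes_per_dim = 4   # float32
chunk_chars = 1000  # characters of source text per chunk

bytes_per_chunk = dims * bytes_per_dim
print(bytes_per_chunk)                # 6144 bytes of embedding per chunk
print(bytes_per_chunk / chunk_chars)  # ~6.1x the size of the text it indexes
```

So the raw vectors are roughly six times larger than the plaintext they index, which is why hosting cost dominates regardless of which embedding model generated them.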

3cubefox
I think in some cases an embedding approach produces better results than either an LLM or a simple keyword search, but I'm not sure how often. For a keyword search you have to know the "relevant" keywords in advance, whereas embeddings are a bit more forgiving. Though not as forgiving as LLMs, which on the other hand can't give you the sources and may make things up, especially on information that doesn't occur very often in the source data.

Concepts that are informed by game theory and other formal models

Strongly in favour of this.

There are people in academia doing this type of work; a lot of them are economists by training studying sociology and political science. See for example Freakonomics by Steven Levitt, or Daron Acemoglu, who recently won a Nobel Prize. Search keywords: neo-institutionalism, rational choice theory. There are a lot of political science papers on rational choice theory; I haven't read many of them, so I can't give immediate recommendations.

I'd be happy to join you in your... (read more)

AI can do the summaries.

I agree that people behave differently in observed environments.

Thanks this is super helpful! Edited.

usually getting complete information was the hard part of the project

 

Thoughts on Ray Dalio-style perfect surveillance inside the org? Would that have helped? Basically put everyone on video camera and let everyone inside the org access the footage.

 

Disclaimer: I have no personal reason to accelerate or decelerate Anthropic. I'm just curious from an org design perspective.

1PipFoweraker
You'd run into cognitive overhead limits. Manually reviewing other people's conversations can only really happen at 1:1 to 2:1 speeds. Summaries are much more efficient. Plus, people behave very differently in radically observed environments. Panopticons were designed as part of a punishment system for a reason.
1Anders Lindström
By the time you have an AI that can monitor and figure out what you are actually doing (or trying to do) on your screen, you do not need the person. Ain't worth the hassle to install cameras that will be useless in 12 months' time...

Can you send the query? Also, can you try typing the query twice into the textbox? I'm using OpenAI text-embedding-3-small, which seems to sometimes work better if you type the query twice. Another thing you can try is retrying the query every 30 minutes; I'm cycling subsets of the data every 30 minutes as I can't afford to host the entire data at once.

2cubefox
I think my previous questions were just too hard; it does work okay on simpler questions. Though then another question is whether text embeddings improve over keyword search or just an LLM. They seem to be some middle ground between Google and ChatGPT. Regarding data subsets: recently there were some announcements of more efficient embedding models, though I don't know what the relevant parameters here are vs. that OpenAI embedding model.

Thanks for feedback. 

I’ll probably do the title and trim the snippets. 

One way of getting a quote would be to do LLM inference and generate it from the text chunk. Would this help?

2cubefox
I think not, because in my test the snippet didn't really contain such a quote that would have answered the question directly.

Update: HTTPS issue fixed. Should work now.

booksearch.samuelshadrach.com

Books Search for Researchers

Thanks for your patience. I'd be happy to receive any feedback. Negative feedback especially.

2cubefox
I see you fixed the https issue. I think the resulting text snippets are reasonably related to the input question, though not overly so. Google search often answers questions more directly with quotes (from websites, not from books), though that may be too ambitious to match for a small project. Other than that, the first column could be improved with relevant metadata such as the source title. Perhaps the snippets in the second column could be trimmed to whole sentences if it doesn't impact the snippet length too much. In general, I believe snippets currently do not show line breaks present in the source.

Update: HTTPS should work now

2cubefox
Okay, that works in Firefox if I change it manually. Though the server seems to be configured to automatically redirect to HTTPS. Chrome doesn't let me switch to HTTP.

Search engine for books

http://booksearch.samuelshadrach.com

Aimed at researchers

 

Technical details (you can skip this if you want):

Dataset size: libgen 65 TB, (of which) unique English epubs 6 TB, (of which) plaintext 300 GB, (from which) embeddings 2 TB, (hosted on) 256+32 GB CPU RAM

Did not do LLM inference after embedding search step because human researchers are still smarter than LLMs as of 2025-03. This tool is meant for increasing quality for deep research, not for saving research time.

Main difficulty faced during project - disk throughput is a b... (read more)
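As a sanity check, the stated pipeline sizes are roughly self-consistent, assuming ~1 byte per character of plaintext, 1000-character chunks, and 1536-dim float32 embeddings per chunk (the figures mentioned elsewhere in this thread):

```python
plaintext_bytes = 300e9            # "plaintext 300 GB"
chunk_chars = 1000                 # characters per chunk (~1 byte per char)
dims, bytes_per_dim = 1536, 4      # float32 embedding per chunk

n_chunks = plaintext_bytes / chunk_chars               # 300 million chunks
embedding_tb = n_chunks * dims * bytes_per_dim / 1e12
print(embedding_tb)  # ~1.84 TB, consistent with the stated "embeddings 2 TB"
```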

4cubefox

Okay!

I'm not universally arguing against all technology. I'm not even saying that an arms race means this tech is not worth pursuing; just be aware you might be starting an arms race.

Intelligence-enhancing technologies (like superintelligent AI, connectome-mapping for whole brain emulation, human genetic engineering for IQ) are worth studying in a separate bracket IMO because a very small differential in intelligence leads to a very large differential in power (offensive and defensive, scientific and business and political, basically every kind of power).

@TsviBT I don't know if you were the one who downvoted my comment, but yeah I don't think you've engaged with the strongest version (steelman?) of my critique. Laws (including laws promoting genomic liberty) don't carry the same weight during a cold war as they do during peacetime. Incentives shape culture, culture shapes laws.

And the incentives change significantly when a technology upsets the fundamental balance of power between the world's superpowers.

3TsviBT
Or maybe you're arguing "don't develop any technology" or "don't develop any powerful technology" because "governments might misuse it". That's something you could reasonably argue, but I think you should just argue that in general if that's what you're saying, so the case is clearer.
3TsviBT
I didn't downvote any of your comments, and I don't see any upthread comments with any downvotes! Anyway, you could steelman your case if you like. It might help if you compared to other technologies, like "We should develop powerful thing X but not superficially similar powerful thing Y because X is much worse given that there are governments", or something.

Superhumans that are actually better than you at making money will eventually be obvious. Yes, there may be some lead time obtainable before everyone understands, but I expect it will only be a few years at maximum.

Yes it’s possible we end up in a world where the US govt is basically competing with its own shadow yet again. US startup builds some tech, it gets copied 6 months later by non-US startup, US startup feels pressure to move faster as a result and deploys next tech, the next tech too gets copied, etc etc. 

I’m not saying this will definitely happen, but there’s a bunch of incentives pushing in this direction. 

I’m glad you’re thinking about it. 

I would still encourage you to forecast what capabilities look like not just as of 2025, but after a trillion dollars of R&D enters this space. Mobilising a trillion dollars for a field of such importance is not difficult once successful clinical results are out. All your claims about mean and variance, or about whole genome synthesis being possible, will no longer apply afaik.

I will let you know when I write an article of this type!


In general though, US policy-making circles have a long history of applying just enough pressure on other countries that the frontier of R&D in every emerging field remains in the US. It's not a coincidence that the frontiers of quantum computing, genomics, fusion energy, AI and a hundred other technologies all lie in the US.

Sometimes this does lead to war; US military leaders have afaik started wars over who has nuclear weapons, who has chemical weapons, and who has oil. But often th... (read more)

3TsviBT
From my lay perspective w.r.t. international politics, this seems like it would plausibly be good, to be clear. My frontpage says: The US, or at least what the US is supposed to be and isn't impossibly far from, is a place where you could have strong boundaries preventing the government from restricting genomic liberty, while also supporting the development of the tech.

Cool! 

Have you read Meditations on Moloch?

My view on this is that even when individuals and countries are not under tight "adapt or die" competition constraints, such as during wartime or poverty, everyone faces incentive gradients. Free choices aren't exactly free. For instance, I was "free" to not learn software development and pick a lower-paying job, but someone from the outside could still predict with high likelihood that I was going to learn software anyway.

5TsviBT
I have read it (long ago). I take the general point that making this technology partially removes a barrier, where previously human influence on children is limited, and afterward there is at least somewhat more influence. E.g. this could lead to:

* Sacrificing wellbeing for competitiveness
* Social pressure / "soft eugenics"
* Competitive selection (where I mentioned the Meditations)

One point of comparison is the default. There is a human-evolution that is always happening. Do we like its results? Do we trust it?

Another thing to point out is that the barrier is only somewhat eroded. Except for whole genome synthesis, the amount of control that germline engineering gives is fairly small compared to the total genetic and phenotypic variation in humans. You and I have 5 or 10 million differing alleles between us; GV would have an effect that's comparable to, say, 1000s of alleles. (This doesn't directly make sense for selection, but morally speaking.) In terms of phenotypes, most of the variation would still be in uncontrolled genomic differences and non-genetic differences. Current IQ PGSes explain <20% of the variance in IQ.

Now, to some extent I can't have it both ways; either the benefits of GE are enormous because we're controlling traits somewhat, or we aren't controlling traits much and the benefits can't be that big. But think, for example, of shifting the mean of your kid's expected IQ, without much shifting the variance. (For some disease traits you can drive the probability of the disease far down, which is a lot of phenotypic control; but that seems not so bad!)

I think skill can be stolen via cyberhacking + espionage, assuming you are able to prevent them from just hiring ex-employees and ex-researchers. The meaningful question for me is how many months of lead time anyone can maintain before they get copied by other nuclear-armed countries.


Unless you really find a better plan, my first guess is this is going to lead to an international arms race between multiple countries to develop the most intelligent and politically loyal embryos they possibly can, as fast as they possibly can. The race won’t stop until we hi... (read more)

4TsviBT
I wish there was some more grounded analysis of this sort of thing I could read somewhere. E.g., historical comparisons of other things that states have done with a similar motivation. Or e.g. cases where some technology gets used for good and for evil, and then is it net positive? I feel conversations about what states will do with germline engineering just hit a wall immediately because who knows what will happen. I think extreme fear of / antipathy towards eugenics is good in part because it constitutes political will to not have states do this sort of thing--controlling people's reproduction, influencing populations. Accordingly, I advocate for genomic emancipation, which is directly opposed to state eugenics.

Got it. 

On a technical level, I think more speculation is good before we run the experiment, given that these people, if born, may very well end up the most powerful people in history. Even small differences in the details could lead to very different futures for humanity.

On a non-technical level, it might be worth writing a post about your stance on the morality and politics of this. So we can separate that whole discussion from the technical discussion.

7TsviBT
I don't super agree with this. But also, I'd view that as somewhat of a failure. Part of why I want the technology to be wideley available is that I think that decreases the path-dependence. Lots of diverse GE kids means a more universalist future, hopefully. Yeah. I have several articles I want to write, though IDK which will become high-priority enough. Some thoughts on genomic liberty are here: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=ZeranH3yDBGWNxZ7h

Yes, I'm assuming political elites ambitious enough to build an intracity network of bullet trains will also figure out some solutions for this. Land use restrictions are okay if the city is big enough. Assuming a 400 km × 400 km city with 200 km/h trains, that's a lot of land area. Even if some of it is used inefficiently, it may not have large effects. I do think allowing free-market-ish building for the city makes sense here though, rather than a slow permitting system for each building. This is for speed alone.

Hmm

So I get that you want to do things with the consent of everyone involved, be it the sperm donor, egg donor, or the people who will actually raise the child. This doesn't preclude thinking about population-level changes or thinking ahead to multi-generational consequences.

Even if people don't explicitly aim for population changes, these might be the emergent effects. It may be individually rational for each person to seek out the highest-trait sperm donors they can find, even if they haven't all coordinated with each other to do it.

More important though,... (read more)

3TsviBT
This is somewhat true, yeah. But it's only somewhat true. E.g.:

* One can unilaterally make the technology cheaper and more effective. Generally this makes it more widely accessible, which makes it harder for enemies who would want to keep it for themselves to do so. E.g. if inequality were going to be a big resulting problem, you can fix some of it unilaterally in this way.
* Some key aspects of the technology will still require large amounts of skill. I'm thinking in particular of polygenic scores. If KJU wants to make an obedience PGS that actually works for the Korean population, he'd have to find a team of geneticists and psychometricians willing to do so. To say it another way, there is a separate cat to be let out of the bag for each trait (roughly speaking) that you might want to select for/against.

I haven't made up my mind on whether I endorse human genetic engineering, but I have technical doubts:

1. For simple embryonic selection, shouldn't we consider the highest IQ of male embryos rather than the expected IQ of the embryos?

If I understand correctly, there is a bottleneck on eggs per egg donor, but not as tight a bottleneck on sperm cells per sperm donor. Assume there are 10,000 high-IQ egg donors, 100 eggs per donor, mating with 1,000,000 sperm cells from one high-IQ sperm donor. Out of the 1,000,000 embryos, let's say the highest IQ embryo grows to childbearing ag... (read more)

3TsviBT
There's something really off about the frame of your question. I'm not exactly sure where you're coming from. I'm not trying to direct anyone's reproduction, I'm not trying to influence anything at a population level, and also I'm not really focused on anything about multiple generations.
8TsviBT
Yeah, I don't know if it makes much sense, and haven't thought too much about it. A few points:

* I don't know if I actually care too much. Or rather, I think it would be awesome if +9 SD IQ just makes sense somehow, and also we can enable parents to choose that, and also it flows into more generally being insightful. But I think just having tons of people sample from a distribution with a mean of +6 SDs already makes the outlook for humanity significantly better on my view. It's not remotely the case that every such person will be a great scientist or philosopher; people will have different interests, we shouldn't be strongly conscripting GE children, and there are likely several other traits quite important for contributing to defending against humanity's greatest threats (e.g. curiosity, bravery, freethink; attention, determination, drive; wisdom, reflectiveness).
* Actually targeting +9 SDs on anything, especially IQ, is potentially quite dangerous and should either be done with great caution after further study, or not at all. See the bullet point "Traits outside the regime of adaptedness".
* But if I speculate:
  * Some genetic variants will be about "sand in the gears" of the brain. It doesn't seem crazy to think that you can get a lot of performance by just removing an exceptionally large amount of this. But IDK how much there actually is; kman might have suggested that this isn't actually much of the genetic variance in IQ.
  * Some genetic variants will be about "scaling up" (e.g. literally growing a bigger brain, or a more vascularized one, or one with a higher metabolic budget to spend on plasticity, or something like that, IDK). These seem like they plausibly could keep having meaningful effects past the human envelope, but on the other hand could easily hit limits discussed in "Traits outside...".
  * Some genetic variants will be about, vaguely, budgeting resources between different neural algorithms. These could easily keep having effect

Can you share why?

If I understand correctly, skyscrapers don't scale as well due to shadows. For every additional floor of skyscraper that's built, there are multiple floors' worth of ground area on which building another skyscraper is now a bad idea. So a large region with densely packed 4-storey buildings packs in more people than the same region with some 100-storey skyscrapers.

2lsusr
I think we're in agreement that dense 4-story buildings usually tend to be more efficient than skyscrapers. I'm mostly referring to cities like Paris, which are shorter than free-market economics would build, and especially cities (and even more, suburbs) of the USA, where land use restrictions are even more restrictive.

Thoughts on bullet trains to expand cities?

https://www.lesswrong.com/posts/mrBZh7YG4nmcjAcof/xpostah-s-shortform?commentId=HudMWqBiavjuYJFxY

2lsusr
Bullet trains are nice, but I feel they make more sense for connecting cities. Generally-speaking, the best direction to expand cities is to build upward and downward.

I agree my point is less important if we get ASI by 2030, compared to if we don’t get ASI. 

That being said, the arms race can develop over the timespan of years, not decades. 6-year-old superhumans will prompt people to create the next generation of superhumans, and within 10-15 years we will have children from multiple generations, where the younger generations have edits with stronger effect sizes. Once we can see the effects across these multiple generations, people might go at max pace.

If I understand correctly it is possible to find $300/mo/bedroom accommodation in rural US today, and a large enough city will compress city rents down to rural rents. A govt willing to pursue a plan as interesting as this one may also be able to increase immigrant labour to build the houses and relax housing regulations. US residential rents are artificially high compared to global average. (In some parts of the world, a few steel sheets (4 walls + roof) is sufficient to count as a house, even water and sewage piping in every house is not mandatory as lon... (read more)

I think we have a huge advantage with humans simply because there isn't the same potential for runaway self-improvement. But in the long term (multiple generations), it would be a concern.

 

How do you know you can afford to wait multiple generations? My guess is superhuman 6-year-olds demonstrating their capabilities on YouTube is sufficient to start off an international arms race for more superhumans. (Increase the number of people and increase the capability level of each person.) And once the arms race is started, it may never stop until the end state of this self-improvement is hit.

P.S. Also we don't know the end state of this race. +5 SD humans aren't necessarily the peak, it's possible these humans further do research on more edits.

This is unlikely to be a carefully controlled experiment; it is more likely to be nation states moving at maximum pace to produce more babies so that they control more of the world when a new equilibrium is reached. And we don't know when, if ever, this equilibrium will be hit.

PSA

Popularising human genetic engineering is also by default going to popularise lots of neighbouring ideas, not just the idea itself. If you are attracting attention to this idea, it may be useful for you to be aware of this.

The example of this that has already played out is popularising "ASI is dangerous" also popularises "ASI is powerful hence we should build it".

Human genetic engineering targeting IQ as proposed by GeneSmith is likely to lead to an arms race between competing individuals and groups (such as nation states).

 - Arms races can destabilise existing power balances such as nuclear MAD

 - Which traits people choose to genetically engineer in offspring may depend on what's good for winning the race rather than what's long-term optimal in any sense.

 - If maintaining lead time against your opponent matters, there are incentives to bribe, persuade or even coerce people to bring genetically edit... (read more)

2Viliam
If you convince your enemies that IQ is a myth, they won't be concerned about your genetically engineered high IQ babies.
8cubefox
Standard objection: Genetic engineering takes a lot of time till it has any effect. A baby doesn't develop into an adult overnight. So it will almost certainly not matter relative to the rapid pace of AI development.

1 is going to take a bunch of guesswork to estimate. Assuming it were possible to migrate to the US and live at $200/mo, for example, how many people worldwide would be willing to accept that trade? You can run a survey or small-scale experiment at best.

What can be done is expand cities to the point where no more new residents want to come in. You can expand the city in stages. 

1Purplehermann
Definitely an interesting survey to run. I don't think the US wants to triple the population with immigrants, and $200/month would require a massive subsidy (the internet says $1,557/month average rent in the US). How many people would you have to get in your city to justify the progress? 100 million would only be half an order of magnitude larger than Tokyo, and you're unlikely to get enough people to fill it in the US (at nearly a third of the population, you'd need to take a lot of population from other cities). How much do you have to subsidize living costs, and how much are you willing to subsidize?

Thanks!

Your write up was useful to me. 

I don't think Tor scales in its current form because it relies on altruistic donors to provide bandwidth. I agree there may be a way to scale it that doesn't rely on altruism.

I agree you’re pointing at an important problem. Namely when there’s a large structure aimed at achieving some task for users, and it deliberately does it poorly, some of our best solutions are to ensure  low cost-of-exit for users and allow for competing alternatives. 

This can be slow and wasteful as millions of people need to b... (read more)

Would you invest your own money in such a project?

If I were a billionaire I might. 

I also have (maybe minor, maybe not minor) differences of opinion with standard EA decision-making procedures of assigning capital across opportunities. I think this is where our crux actually is, not on whether giant cities can be built with reasonable amounts of funding. 

And sorry I won’t be able to discuss that topic in detail further as it’s a different topic and will take a bunch of time and effort.  

1Purplehermann
Our crux is whether the amount of investment to build one has a positive expected return on investment, breaking down into:

1. Whether you could populate such a city
2. Whether this is a "try everything regardless of cost" issue, given that a replacement is being developed for other reasons.

I suggest focusing on 1, as it's pretty fundamental to your idea and easier to get traction on.

I love this post.


1. Another important detail to track is what the leader says in private versus what they say in public. Typically you may want to first acquire data and attempt to trigger these cascades in private and in smaller groups, before you try triggering them across your nation or planet. 

2. I also think the Internet is going to shift these dynamics, by forcing private spheres of life to shrink or even become non-existent, and by increasing the number of events that are in public and therefore have potential to trigger these cascades. 

Fo... (read more)


This is probably obvious to you, but you can widen the working-memory bottleneck by making lots of notes. You still need to store the "index" of the notes in your working memory, though, to be able to get back to relevant ideas later. Making a good index includes compressing the ideas until you get the "core" insights into it.

Some part of what we consider intelligence is basically search, and some part of what we consider faster search is basically compression.

Tbh you can also do multi-level indexing, the top-level index (crisp world model of everyth... (read more)

Got it, I understood what you're trying to say. I agree living in cities has some downsides compared to living in smaller towns, and if you could find a way to get the best of both, that could be better than either.

I mean, I know a bunch of devs who can accurately answer "can state-of-the-art AI do task X, yes or no?" or at least make progress towards answering it. You could put up a job description with an approx salary here on LessWrong or elsewhere; I could forward it to some people.

especially not at once.

It could be built in stages. Like, build a certain number of bullet train stations at a time and wait to see whether immigrants + real estate developers + corporations start building the city further, or whether the stations end up unused.

I agree there is opportunity cost. It will help if I figure out the approximate costs of train networks, water and sewage plumbing, etc.

I agree there are higher risk higher reward opportunities out there, including VR. In my mind this proposal seemed relatively low risk so I figured it’s worth thinking throug... (read more)

1Purplehermann
Lower/Higher risk and reward is the wrong frame. Your proposal is high cost. Building infrastructure is expensive. It may or may not be used, and even if used it may not be worthwhile. R&D for VR is happening regardless, so 0 extra cost or risk. Would you invest your own money into such a project?       "This is demonstrably false. Honestly the very fact that city rents in many 1st world countries are much higher than rural rents proves that if you reduced the rents more people would migrate to the cities." Sure, there is marginal demand for living in cities in general. You could even argue that there is marginal demand to live in bigger vs smaller cities.  This doesn't change the equation: where are you getting one billion residents - all of Africa? There is no demand for a city of that size.

Sorry I didn’t understand your comment at all. Why are 1, 2 and 4 bigger problems in 1 billion population city versus say a 20 million population city?

1ProgramCrafter
I'd maintain that those problems already exist in 20M-people cities and will not necessarily become much worse. However, by increasing city population you bring in more people into the problems, which doesn't seem good.

Have you tried llama3? (Latest open source model, hence no moderation)

It might be worth posting a few sample tasks online so software developers can tell you whether they’re automatable or not. 

1Chris Monteiro
I am sure there are some interesting uses of agented AIs in can configure for automated OSINT but this feels quite large a task given I am bottlenecking more in who to hand the data to rather than it being insufficiency rich. Know any preconfigured agency menageries for something like this?

To name some power upstream factors, I'd say "Increase the social value of growth and maturity"

How to actually do this?

It’s easy to say “I wish XYZ were high status in society”. I’m interested in concrete steps a few individuals like you or me can take. Ultimately all this world building has to translate into decisions and actions taken by you and me and other people listening to us, not by a hypothetical member of society.

I agree you are mostly pointing at real problems.
 

When I search "web 3.0" the results seem to hint that people understand t

... (read more)
2StartAtTheEnd
Well, we somehow changed smoking from being cool to being a stupid, expensive and unhealthy addiction. I think the method is about the same here. But the steps an individual can take are very limited. In politics, you have millons of people trying to convert other people into their own ideology, so if it was easy for an individual to change the values of society, we'd have extremists all over. Anyway, you'd probably need to start a Youtube channel or something. Combining competence and simplicity, you could make content that most people could understand, and become popular doing that. "Hoe math" comes to mind as an example. Jordan Peterson and other such people are a little more intellectual, but there's also a large amount of people who do not understand them. Plus, if you don't run the account anonymously, you'd take some risks to your reputation proportional to how controversial your message is. That's a shame. Why are they in web3 in the first place, then? The only difference is the design, and from what I've seen, designs which give power to the users rather than some centralized mega-corporation. I think this is due to attack-defense asymmetry. Attackers have to find just one vulnerability, defenders have to stop all attacks. I do however agree that very few people ask these questions. I think Tor would scale no problem if more people used it, but it has the same problem has 8chan and the privacy-focused products and websites have: All the bad people (and those who were banned on most other sites) flock there first, and they create a scary environment or reputation, and that makes normal people not want to go there/use the service. Many privacy-oriented apps have the reputation of being used by criminals and pedophiles. This problem would go away if there was more places where privacy was valued, since the "bad people" density would go down as the thing in question became more popular. But I've noticed that everything gets worse over time. In order to ha

I agree VR might one day be able to do this (make online meetings as good as in-person ones). As of 2025, bullet trains are more proven tech than VR. I'd be happy if both were investigated in more depth.

1Purplehermann
A few notes on massive cities: Cities of 10Ms exist, there is always some difficulty in scaling, but scaling 1.5-2 OOMs doesn't seem like it would be impossible to figure out if particularly motivated.    China and other countries have built large cities and then failed to populate them   The max population you wrote (1.6B) is bigger than china, bigger than Africa, similar to both American Continents plus Europe . Which is part of why no one really wants to build something so big, especially not at once.   Everything is opportunity cost, and the question of alternate routes matters alot in deciding to pursue something. Throwing everything and the kitchen sink at something costs a lot of resources.   Given that VR development is currently underway regardless, starting this resource intense project which may be made obsolete by the time it's done is an expected waste of resources. If VR hit a real wall that might change things (though see above). If this giga-city would be expected to 1000x tech progress or something crazy then sure, waste some resources to make extra sure it happens sooner rather than later.   Tl;dr: Probably wouldn't work, there's no demand,   very expensive, VR is being developed and would actually be able to say what you're hoping but even better

Have you tried using AI for any part of your process? (And do you have access to o1?)

5Chris Monteiro
My attempts of creating summaries with ChatGPT violated the content policies last I tried. There is lots of OSINT work to do, but until I have normalised all the ID data out from the message data, I am not comfortable handing it over to OSINT specialists or their AIs.