Personal AI Planning

by jefftk
10th Nov 2024
3 min read
Personal AI Planning
11 comments, sorted by top scoring
[-] sanxiyn · 10mo

I understand many people here are native English speakers, but I am not, and one thing I think about a lot is how much time and effort people should spend on learning English. Learning English is a big investment. Will AI advances make language barriers irrelevant? I am very uncertain about this and would like to hear your opinions.

[-] Jack Withers · 6mo

I have high (>90%) confidence that language barriers will become practically non-existent in the near future with AI advances. It is already technically possible, with apps and products that can translate spoken language in near real-time and LLMs that:

 - translate the original meaning into a different language increasingly well compared to older translators (such as Google Translate, which gained a reputation for often producing bad translations)

 - are becoming increasingly cheap to run due to technical advances

 - are becoming increasingly popular, which means using AI to translate will progressively look mainstream rather than irresponsible and nerdy, and the available options and interfaces will keep growing.

That said, I have moderate (~40%) confidence that actually knowing a language might become a status symbol and be used to signal intelligence (it has served this role historically, with, for example, Latin being the international language of the upper class).

Given these factors, I'd recommend most people invest in English to the point of being able to communicate (and perhaps use LLMs or similar tools to increase fluency with low time investment) and just... wait to see how it actually turns out with AI. Learning new languages has been tied to lower cognitive decline in old age, so it's not like it would be a purely bad investment, and it allows you to have more perspectives on the world (as the quote goes: "the limits of my language are the limits of my world").

[-] Milan W · 10mo

Assuming private property as currently legally defined is respected in a transition to a good post-TAI world, I think land (especially in areas with good post-TAI industrial potential) is a pretty good investment. It's the only thing that will keep on being just as scarce. You do have to assume the risk of our future AI(-enabled?) (overlords?) being Georgists, though.

[-] Adam Jermyn · 10mo

Unless we build more land (either in the ocean or in space)?

[-] Milan W · 10mo

You're right. Space is big.

[-] David Gross · 10mo

A lot of the current education system aims to give children skills that they can apply to the job market as it existed 20 years ago or so. I think children would be better-advised to master more general skills that could be applied to a range of possible rapidly changing worlds: character skills like resilience, flexibility, industriousness, rationality, social responsibility, attention, caution, etc.

Come to think of it, such skills probably represent more reliable "investments" for us grown-ups too.

[-] jefftk · 10mo

Can you give examples of curriculum elements that you think are aimed at the world of 20 years ago? The usual criticism I see is that school is barely connected to the needs of the working world.

[-] martinkunev · 10mo

Not exactly a response, but some things from my experience. In elementary school in the late 90s we studied calligraphy. In high school (mid 2000s) we studied DOS.

[-] Arjun Panickssery · 10mo

By "calligraphy" do you mean cursive writing?

[-] khafra · 10mo

In the late 80s, I was homeschooled and studied calligraphy (as well as cursive), but I considered that more of a hobby than preparation for entering the workforce of 1000 years ago.

I also learned a bit about DOS and BASIC, after being impressed with the fractal-generating program that the carpenter working on our house wrote and demonstrated on our computer.

[-] martinkunev · 10mo

They were teaching us how to make our handwriting beautiful, and we had to practice. The teacher would look at our notebooks and say things like "You see this letter? It's tilted in the wrong direction. Write it again!"

This was a compulsory part of the curriculum.

Crossposted to the EA Forum.
LLMs are getting much more capable, and progress is rapid. I use them in my daily work, and there are many tasks where they're usefully some combination of faster and more capable than I am. I don't see signs of these capability increases stopping or slowing down, and if they do continue I expect the impact on society to start accelerating as they exceed what an increasing fraction of humans can do. I think we could see serious changes in the next 2-5 years.

In my professional life, working on pathogen detection, I take this pretty seriously. Advances in AI make it easier for adversaries to design and create pathogens, so it's important to get a comprehensive detection system in place quickly. Similarly, more powerful AIs are likely to speed up our work in some areas (computational detection) more than others (partnerships) and increase the value of historical data, and I think about this in my planning at work.

In other parts of my life, though, I've basically been ignoring that I think this is likely coming. In deciding to get more solar panels and not get a heat pump, I looked at historical returns and utility prices. I book dance gigs a year or more out. I save for retirement. I'm raising my kids in what is essentially preparation for the world of the recent past.

From one direction this doesn't make any sense: why wouldn't I plan for the future I see coming? But from another it's more reasonable: most scenarios where AI becomes extremely capable look either very good or very bad. Outside of my work, I think my choices don't have much impact here: if we all become rich, or dead, my having saved, spent, invested, or parented more presciently won't do much. Instead, in my personal life my decisions have the largest effects in worlds where AI ends up being not that big a deal, perhaps only as transformative as the internet has been.
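The weighting argument above can be made concrete with a toy expected-value sketch. All probabilities and payoffs here are illustrative assumptions, not the author's numbers; the point is only that scenarios where a choice's outcome washes out contribute nothing to its expected impact:

```python
# Toy model: a personal decision only "pays off" in scenarios where its
# outcome actually differs. In the extreme AI scenarios (everyone rich,
# everyone dead), prescient personal choices wash out, so their payoff
# difference is ~0. All numbers below are made-up assumptions.

scenarios = {
    # name: (probability, payoff difference from choosing presciently)
    "very good (everyone rich)": (0.3, 0.0),  # choice washed out
    "very bad (everyone dead)":  (0.2, 0.0),  # choice washed out
    "AI ~ internet-sized deal":  (0.5, 1.0),  # choice actually matters
}

expected_impact = sum(p * payoff for p, payoff in scenarios.values())
print(expected_impact)  # entirely driven by the moderate scenario
```

However the probabilities are set, the moderate "internet-sized" world dominates the calculation, which is why personal decisions end up optimized for it.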

Still, there are probably areas in our personal lives where it's worth doing something differently? For example:

  • Think hard about career choice: if our kids were a bit older I'd want to be able to give good advice here. How is AI likely to impact the fields they're most interested in? How quickly might this go? What regulatory barriers are there? How might the portions they especially enjoy change as a fraction of the overall work?

  • Maybe either hold off on having kids or have them earlier than otherwise. If we were trying to decide whether to have (another) kid I'd want to think about how much of wanting to have a kid was due to very long term effects (seeing them grow into adulthood, increasing the chance of grandchildren, pride in their accomplishments), how I'd feel if children conceived a few years from now had some (embryo selection) or a lot of (genome editing) advantages, how financial constraints might change, what if I never got to be a parent, etc.

  • Postponing medical treatment that trades short-term discomfort for long-term improvement: I'm a bit more willing to tolerate and work around the issues with my wrists and other joints than I would be in a world where I thought medicine was likely to stay on its recent trajectory.

  • Investing money in ways that anticipate this change: I'm generally a pretty strong efficient markets proponent, but I think it's likely that markets are under-responding here outside of the most direct ways (NVDA) to invest in the boom. But I haven't actually done anything here: figuring out which companies I expect to be winners and losers in ways that are not yet priced in is difficult.

  • Avoiding investing money in ways that lock it up even if the ROI is good: I think it's plausible that our installing solar was a mistake and keeping the money invested to retain option value would have been better. I might prefer renting to owning if we didn't already own.
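The lock-up vs. option-value tradeoff in the last bullet can be sketched with toy numbers. Every figure below (installation cost, annual savings, market return, horizons) is a made-up assumption, not the author's actual finances; the sketch just shows how a shortened effective horizon flips the comparison:

```python
# Toy comparison: lock money into solar panels (linear savings) vs. keep it
# liquid and invested (compounding). If a transformative shift at year T
# makes later utility savings irrelevant, T becomes the effective horizon.
# All figures are illustrative assumptions.

def solar_value(cost: float, annual_savings: float, horizon: int) -> float:
    """Nominal utility savings accrued before the horizon, minus the upfront cost."""
    return annual_savings * horizon - cost

def invested_value(cost: float, rate: float, horizon: int) -> float:
    """Gain from leaving the same money compounding in the market instead."""
    return cost * ((1 + rate) ** horizon - 1)

cost, savings, rate = 20_000, 2_500, 0.03

for horizon in (5, 15, 25):
    print(f"T={horizon:2d}y  solar: {solar_value(cost, savings, horizon):>8,.0f}"
          f"  invested: {invested_value(cost, rate, horizon):>8,.0f}")
```

With these numbers, solar comes out ahead over a long horizon but loses badly if the effective horizon is short, which is the sense in which a good-ROI investment that locks money up can still be a mistake.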

What are other places where people should be weighing the potential impact of near-term transformative AI heavily in their decisions today? Are there places where most of us should be doing the same different thing?