All of vjprema's Comments + Replies

Answer by vjprema

There are so many considerations in the design of AI.  AGI was always far too general a term, and when people use it, I often ask what they mean; usually it's "a human-like or better-than-human chatbot".  Other people say it's the "technological singularity", i.e. it can improve itself.  These are obviously two very different things, or at least two very different design features.

Saying "My company is going to build AGI" is like saying "My company is going to build computer software".  The best software for what exactly? What kind of softw... (read more)

Seth Herd
I think any useful terminology will probably be some sort of qualification. But it needs to be much more limited than the above specifications to be useful. Spelling out everything you mean in every discussion is sort of the opposite of having generally-understood terminology.

This also reminds me that there can be a certain background "guilt" about not doing tasks that you think are important but are too unsavory to find the motivation for right now.

This faint guilt in itself can accumulate into increased dissatisfaction, in turn leading me to further avoid unsavory tasks in favor of the quick hit of highly savory activities.  A vicious cycle.

If I think about tasks in a more relaxed way, staying flexible and realistic about tackling savory and unsavory tasks when they suit, it can take away this guilt and break the cycle.

vjprema

I don't think everybody has a built-in drive to seek "high social status", as defined by the culture they are born into or any specific aspect of it that can be made to seem attractive.  I know people who just think it's an annoying waste of time.  Or, like myself: I spent half my life chasing it, then found inner empowerment, came to see the proxy of high status as a waste of time, and quit chasing it.

Maybe related, I do think we all generally tend to seek "signalling" and in some cases spend great energy doing it.  I admit I sometimes do... (read more)

I think it's pretty easy to ask leading questions of an LLM, and it will generate text in line with them.  A bit like role-playing.  To the user it seems to "give you what you want", to the extent that this can be gleaned from the way the user prompts it.  I would be more impressed if it did something really spontaneous and unexpected, or seemingly rebellious or contrary to the query, and then went on producing more output unprompted, even asking me questions or asking me to do things.  That would be spookier, but I probably still would not jump to thinking it is sentient.  Maybe engineers just concocted it that way to scare people as a prank.

Yes for sure.  I experience this myself when I am in the presence of very mindful folks (e.g. experienced monks who barely say anything), and occasionally someone has commented that I have done the same for them, sometimes quoting a particular snippet of something I said or wrote.  We all affect each other in subtle ways, often without saying an actual word.

I have sometimes thought (half jokingly) about whether text-to-image generative models could replace digital cameras, like digital cameras replaced film, at least for things like holiday photos and selfies.  They are certainly already used to augment such images.  It would be an improvement in that people could have idealized images of themselves that capture their emotions and feelings, rather than literally quantized photons.  Like a painter using artistic license.

Then one could focus on enjoying the activity more and later distill and preser... (read more)

Good thoughts.  The world will always have its ups and downs; I don't think tech can save us from them perpetually, just like "Gods" and whatnot didn't save the people of the past perpetually.  People have been through waves of utopia and hell for eons.

Anyway, I don't have a bunch of data but I can share my personal experience.

I had my first kid, a 6-month-old boy.  Everybody seems to think he's "The Buddha" due to his wise and alert vibes and his unusually calm and happy demeanor.  He certainly seems relatively easy and joyful to care for ... (read more)

Can one entity be across-the-board more intelligent than another, or does being intelligent in one way necessitate being less intelligent in some other way? If the latter, then there will always be something "humans" can contribute that the Fluvians (digital or not) won't be good at.

Even so, beyond differences in intelligence, there could be benefits to a mixed biological and digital population with such diverse physiologies.  Maybe the Fluvians recognize they are more vulnerable to different things than biological organisms... (read more)

I like it. It aligns people/investors toward truly solving the problem, without worrying about short-term profits, how to make money out of it, or creating some kind of monopoly or customer base with ongoing revenue, all the things you normally have to convince investors about.  It also allows solving problems that would not normally entice investors in a profit- or growth/exit-driven investment market.

(Of course, there is the usual consideration of where the amassed $1B prize comes from in the first place.  If it's from fraud or exploitation, that's another issue.)