Here is the full stream.

I haven't seen LessWrong discuss BCI technology all that much, so I'm curious what some of the people here think about the current SOTA, and whether such devices will eventually be able to live up to their lofty promises.

2 Answers

MrThink

20

Elon Musk has argued that humans can take in a lot of information through vision: by looking at a picture for one second, you absorb a great deal. Text and speech, however, are not very information dense, so because we use keyboards or speech to communicate information outwards, getting it out takes a long time.
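
A rough back-of-envelope comparison makes the asymmetry concrete. Every figure below is an illustrative assumption, not a measurement:

```python
# Rough, illustrative comparison of human input vs. output bandwidth.
# Every number here is an assumption for the sake of the argument.

typing_wpm = 60            # assumed typing speed, words per minute
speech_wpm = 150           # assumed speaking speed, words per minute
chars_per_word = 5         # ~1 byte per character

typing_bytes_per_sec = typing_wpm * chars_per_word / 60    # ~5 B/s
speech_bytes_per_sec = speech_wpm * chars_per_word / 60    # ~12.5 B/s

# Glancing at a picture for one second: assume a modestly compressed
# 100 kB photo as a hand-wavy proxy for what vision "takes in".
vision_bytes_per_sec = 100_000

print(f"typing: {typing_bytes_per_sec:9.1f} B/s")
print(f"speech: {speech_bytes_per_sec:9.1f} B/s")
print(f"vision: {vision_bytes_per_sec:9.1f} B/s "
      f"(~{vision_bytes_per_sec / typing_bytes_per_sec:,.0f}x typing)")
```

Under these assumptions the gap between reading an image and typing one out is around four orders of magnitude, which is the asymmetry the argument rests on.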

One possibility is that AI could help interpret the uploaded data and fill in details to make the uploaded information more useful. For example, you could "send" an image of something through the Neuralink, an AI would interpret it and fill in the details that are unclear, and then you would have an image very close to what you imagined, containing several hundred or maybe thousands of kilobytes of information.
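
As a purely hypothetical sketch of what such a pipeline could look like (the decoder and generative model below are placeholders made up for illustration, not anything Neuralink has described):

```python
import numpy as np

# Hypothetical pipeline: a coarse "mental image" latent decoded from neural
# activity is handed to a generative model that fills in the missing detail.
# decode_latent() and GenerativeModel are made-up placeholders, not real APIs.

rng = np.random.default_rng(0)

def decode_latent(neural_features: np.ndarray) -> np.ndarray:
    """Stand-in decoder: map raw neural features to a small latent vector.
    In practice this would be a trained model, not a random projection."""
    projection = rng.standard_normal((neural_features.size, 64))
    return neural_features @ projection          # 64-dim coarse description

class GenerativeModel:
    """Stand-in for an image generator conditioned on the decoded latent."""
    def render(self, latent: np.ndarray, resolution: int = 256) -> np.ndarray:
        # A real model would synthesize plausible detail consistent with
        # the latent; here we just return an image-shaped array.
        return rng.random((resolution, resolution, 3))

neural_features = rng.standard_normal(1024)      # e.g. spike-band features
latent = decode_latent(neural_features)          # the low-bandwidth "upload"
image = GenerativeModel().render(latent)         # AI fills in the details
print(latent.shape, image.shape)                 # (64,) (256, 256, 3)
```

The point of the sketch is only that the implant would need to transmit a small, coarse description, with most of the kilobytes in the final image supplied by the generative model rather than the brain.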

The Neuralink would only need to increase the productivity of an occupation by a few percent to be worth the 3,000-4,000 USD that Elon Musk believes the price will eventually drop to.
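
A quick illustrative payback calculation, with the salary, gain, and price all assumed for the sake of the example:

```python
# Illustrative payback estimate; salary, gain, and price are all assumptions.
annual_salary = 60_000      # USD, assumed knowledge-worker salary
productivity_gain = 0.03    # "a few percent"
device_cost = 3_500         # USD, midpoint of the quoted 3000-4000 USD range

annual_value = annual_salary * productivity_gain        # 1,800 USD / year
payback_years = device_cost / annual_value              # ~1.9 years
print(f"value created per year: ${annual_value:,.0f}")
print(f"payback period: {payback_years:.1f} years")
```

Under those assumptions the device pays for itself in about two years, which is why even a small productivity gain could justify the purchase.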

I think that the AI here is going to have to not just fill in the blanks but convert to a whole new intermediary format. I say this because there are lots of people who, despite appearing normal from the outside, don't see mental images at all. A less extreme example is people who do and don't subvocalise while reading: I know that when I'm absorbed in the middle of a novel it's basically just a movie playing in my head, with no conscious spelling-out of the words, but for other people there is a narrator. Because of this the larg... (read more)

1 MrThink
It does seem like a reasonable analogy that the Neuralink could be like a "sixth sense" or an extra (very complex) muscle.
I find it likely that Neuralink will succeed in increasing the bandwidth for "uploading" information from the brain, and I think it will do so with the help of AI. For example, you could send an image of something through the Neuralink, an AI would interpret it and fill in the details that are unclear, and then you would have an image very close to what you imagined, containing several hundred or maybe thousands of kilobytes of information.

I would be very interested to know if self-reported variation in mental imagery will significantly affect the ability to use such a system. Also, how trainable that is as a skill.

Gunnar_Zarncke

-10

I think something like Neuralink is needed to make AGI workable for humans; otherwise we are just bystanders. Tools, services, and assistants are not natural enough.

See my old post here: https://www.lesswrong.com/posts/TbFMq8XkJAYa3EELw/when-does-technological-enhancement-feel-natural-and