3 things large language models need in an era of ‘sentient’ AI hype

All hell broke loose in the AI world after The Washington Post reported last week that a Google engineer thought that LaMDA, one of the company’s large language models (LLMs), was sentient.

The news was followed by a frenzy of articles, videos, and social media debates over whether current AI systems understand the world as we do, whether AI systems can be conscious, and what the requirements for consciousness would be.

We have reached a point where large language models are good enough to convince many people, including engineers, that they are on par with natural intelligence. At the same time, they are still prone to dumb mistakes, as these experiments by computer scientist Ernest Davis show.

What makes this concerning is that research and development on LLMs is mostly controlled by large tech companies looking to commercialize the technology by integrating it into applications used by hundreds of millions of people. It is therefore important that these applications remain safe and robust to avoid confusing or harming their users.
