
To 'democratize' AI, make it work more like a human brain

Image: A finger touches a smartphone screen reading "Get more out of Gemini." (Credit: Adobe Stock)

Since the launch of ChatGPT in 2022, AI platforms based on a computer science approach called deep learning have spread to every corner of society: they're in your emails, on recipe sites and in social media posts from politicians.

That popularity, however, has also brought an unexpected twist, said Alvaro Velasquez, assistant professor in the Department of Computer Science at CU Boulder: The smarter AI gets, the less accessible it becomes.

According to one estimate, Google spent nearly $190 million training its latest chatbot, known as Gemini. That price tag doesn't include the computer chips, labor and maintenance needed to keep Gemini running 24/7. AI platforms also come with a hefty environmental toll: Around the world, AI data centers produce nearly 4% of total greenhouse gas emissions.

These factors are putting AI out of reach of all but the largest corporations, Velasquez said.

"Historically, there was a much more level playing field in AI," he said. "Now, these models are so expensive that you have to be a big tech company to get into the industry."

In a paper, he and his colleagues argue that an approach known as neurosymbolic AI could help to democratize the field.

Embraced by a growing number of computer scientists, neurosymbolic AI seeks to mimic some of the complex and (occasionally) logical ways that humans think.

The strategy has been around in some form or another since the 1980s. But the new paper suggests that neurosymbolic AI could help to shrink the size, and cost, of AI platforms thousands of times over, putting these tools within the grasp of many more people.

"Biology has shown us that efficient learning is possible," said Velasquez, who until recently served as a program manager for the U.S. Defense Advanced Research Projects Agency (DARPA). "Humans don't need the equivalent of hundreds of millions of dollars of computing power to learn."

Image: Alvaro Velasquez (headshot)

Dogs and cats

To understand how neurosymbolic AI works, it first helps to know how engineers build AI models like ChatGPT or Gemini, which rely on a computer architecture known as a neural network.

In short, you need a ton of data.

Velasquez gives a basic example of an AI platform that can tell the difference between dogs and cats. If you want to build such a model, you first have to train it by giving it millions of photos of dogs and cats. Over time, your system may be able to label a brand-new photo, say of a Weimaraner wearing a bow tie. It doesn't know what a dog or a cat is, but it can learn the patterns behind what those animals look like.
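The dog-vs-cat example above can be sketched in a few lines of code. This is a toy illustration, not real deep learning: instead of a neural network trained on millions of photos, it uses a nearest-centroid classifier on two made-up feature values. The point it shares with the article is that the model never "knows" what a dog or cat is; it only learns statistical regularities from labeled examples.

```python
# Toy sketch of supervised pattern learning. Features (body weight in kg,
# snout length in cm) and all data values are invented for illustration.

def train_centroids(samples):
    """'Train' by averaging the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Label a new sample by its nearest class centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

training_data = [
    ((30.0, 12.0), "dog"), ((25.0, 10.0), "dog"),
    ((4.0, 2.0), "cat"), ((5.0, 3.0), "cat"),
]
model = train_centroids(training_data)
print(classify(model, (28.0, 11.0)))  # a brand-new, unseen sample -> "dog"
```

As in the article's example, a sample far outside the training data's distribution is where this purely statistical approach starts to fail.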

The approach can be "really effective," Velasquez said, but it also has major limitations.

"If you undertrain your model, the neural network is going to get stuck," he said. "The naïve solution is you just keep throwing more and more data and computing power at it until, eventually, it gets out of it."

He and his colleagues think that neurosymbolic AI could get around those hurdles.

Here's how: You still train your model on data, but you also program it with symbolic knowledge, or some of the fundamental rules that govern our world. That might include a detailed description of the anatomy of mammals, the laws of thermodynamics or the logic behind effective human rhetoric. Theoretically, if your AI has a firm grounding in logic and reasoning, it will learn faster and with far less data.
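One minimal way to picture the combination described above: a "neural" component proposes candidates with confidence scores, and a symbolic component vetoes any candidate that violates a hard, known rule. The rule, the scores and the candidate fields below are invented placeholders, not the paper's actual method; real neurosymbolic systems integrate the two components far more deeply.

```python
# Minimal sketch of combining learned scores with symbolic knowledge.
# All names and values here are hypothetical stand-ins.

def symbolic_check(candidate):
    """Hypothetical hard rule: carbon count must be even (a stand-in
    for a real constraint, such as chemical valence rules)."""
    return candidate["carbons"] % 2 == 0

def neurosymbolic_pick(candidates):
    """Discard rule-violating candidates, then take the neural best."""
    valid = [c for c in candidates if symbolic_check(c)]
    return max(valid, key=lambda c: c["score"]) if valid else None

# Pretend these confidence scores came from a trained neural network.
proposals = [
    {"name": "A", "carbons": 7, "score": 0.95},  # top score, breaks rule
    {"name": "B", "carbons": 6, "score": 0.80},
    {"name": "C", "carbons": 4, "score": 0.60},
]
best = neurosymbolic_pick(proposals)
print(best["name"])  # the symbolic filter overrules the top neural score
```

Because the symbolic rules hold everywhere, not just where training data exists, the system can reject (or, in generative settings, steer toward) candidates the network alone would mishandle.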

Not found in nature

One place the approach could work really well is in the realm of biology, Velasquez said.

Say you want to design an AI model that could discover a brand-new kind of cancer drug. Deep learning models would likely struggle to do that, in large part because programmers could only train those models on datasets of molecules that already exist in nature.
"Now, we want that AI to discover a highly novel biology, something that doesn't exist in nature," Velasquez said. "That AI model is not going to produce that novel molecule because it's well outside the distribution of data it was trained on."

But, using a neurosymbolic approach, programmers could build an AI that grasps the laws of chemistry and physics. It could then draw on those laws to, in a way, imagine what a new kind of cancer medication might look like.

The idea sounds simple, but in practice, it's devilishly hard to do. In part, that's because logical rules and neural networks run on completely different computer architectures. Getting the two to talk to each other isn't easy.

Despite the challenges, Velasquez envisions a future where AI isn't something that only tech behemoths can afford.

"We'd like to return to the way AI used to be, where anyone could contribute to the state of the art and not have to spend hundreds of millions of dollars," he said.


Co-authors of the new paper include Neel Bhatt, Ufuk Topcu and Zhangyang Wang at the University of Texas at Austin; Katia Sycara and Simon Stepputtis at Carnegie Mellon University; Sandeep Neema at Vanderbilt University; and Gautam Vallabha at Johns Hopkins University.