WolframAlpha, ChatGPT, and the Future of AI

We all know that ChatGPT has taken the world by storm. It is, without doubt, a major advance for Artificial Intelligence in the area of Natural Language Processing. What many may not know is that WolframAlpha, launched in 2009, has supported natural language queries all along. As a long-time user of Wolfram Mathematica, I was pleasantly surprised when the product was launched and experimented with it quite a bit. The product has grown substantially since then and is backed by a huge amount of “curated knowledge”. What I like most about it is the way it presents its answers. Here is a sample:

[Image: WolframAlpha Query]

It is remarkable that when I introduce brackets to indicate precedence in calculations, it is able to handle that correctly:

[Image: Another Query]

[Image: Slightly Modified Query]
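Since the screenshots are not reproduced here, a tiny Python snippet captures the point about brackets and precedence; the numbers below are made up purely for illustration and are not the ones in my actual query.

```python
# Hypothetical numbers, purely for illustration. The same operands give a
# different answer once brackets override the usual operator precedence.
without_brackets = 3 + 4 * 5    # multiplication binds tighter: 3 + 20 = 23
with_brackets = (3 + 4) * 5     # brackets force the addition first: 7 * 5 = 35

print(without_brackets, with_brackets)   # 23 35
```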

Queries need not be restricted to Mathematics or even Science; WolframAlpha can handle several other domains. Here is a different query:

[Image: A General Query]

How does this differ from ChatGPT? Let us see the answers generated by ChatGPT for the earlier two Math queries:

[Image: ChatGPT Math Queries]

As you can see, ChatGPT is not able to differentiate between the two queries. While I am not going to compare the two products in detail, it is only fair to point out that even WolframAlpha will fail in some cases. Here is one such example:

[Image: WolframAlpha Failed Query]

Let us see how ChatGPT handles this case:

[Image: ChatGPT Response]

That is the correct answer!

To me, what is interesting and special about WolframAlpha is not the NLP part or its knowledge base, but the fact that the query is internally converted into Expressions and Concepts built into the Wolfram Language and executed at the symbolic level.
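WolframAlpha's internals are proprietary, but the open-source SymPy library gives a rough feel for what "executing at the symbolic level" means. The sketch below is only an analogy, not WolframAlpha's actual pipeline:

```python
# A rough analogy using the open-source SymPy library (not WolframAlpha's
# actual machinery): the query is held as a symbolic expression object and
# evaluated exactly, rather than being answered as free-form text.
from sympy import Integral, Symbol, exp, oo

x = Symbol("x")
expr = Integral(exp(-x**2), (x, -oo, oo))   # a symbolic object, not a number

print(expr.doit())   # sqrt(pi), computed at the symbolic level
```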

In a recent article, Stephen Wolfram compared ChatGPT and WolframAlpha and gave examples where ChatGPT generated incorrect answers while WolframAlpha got them right. He went on to express his view that ChatGPT could perhaps use WolframAlpha as a “super power” back end to compute the correct responses.

While this is certainly possible and will work in some (or even many) cases, it might not solve the general problem of generating the correct response for an arbitrary input. I believe that a workable and scalable solution lies in adopting a different approach.

The following diagram shows WolframAlpha and ChatGPT as they exist today:

[Image: ChatGPT and WolframAlpha Today]

Each product takes the input and generates the result directly (the fact that WolframAlpha converts this to a symbolic query internally is an implementation detail). This is a “heavyweight” approach where each tries to “do all” and “be all”. Not surprisingly, this leads to failures in some cases. It is impractical to be a “master-of-everything”! And this doesn’t scale easily.

Is there a different approach? I believe that the following scheme provides a better approach:

[Image: A Different Model]

In this approach, we have a “converter” that converts the free-form query to a formal meaning representation language, and this is then processed by a suitable domain expert to generate the correct response.
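To make the idea concrete, here is a minimal sketch of that pipeline in Python. Every name in it (QueryConverter, MeaningRepresentation, ArithmeticExpert) is invented for illustration, and a real converter would do genuine language understanding rather than keyword matching:

```python
# A minimal sketch of the proposed pipeline: free-form query -> formal meaning
# representation -> domain expert. All names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MeaningRepresentation:
    """A formal, machine-processable rendering of a free-form query."""
    domain: str          # e.g. "arithmetic", "geography"
    predicate: str       # the relation or operation being asked about
    arguments: list = field(default_factory=list)

class QueryConverter:
    """Front end: natural language in, meaning representation out."""
    def convert(self, query: str) -> MeaningRepresentation:
        # A hard-coded toy case standing in for the hard NLP problem.
        if "plus" in query and "times" in query:
            return MeaningRepresentation("arithmetic", "add_then_multiply", [3, 4, 5])
        raise NotImplementedError(query)

class ArithmeticExpert:
    """Back end: executes the formal representation; never sees raw text."""
    def answer(self, mr: MeaningRepresentation):
        if mr.predicate == "add_then_multiply":
            a, b, c = mr.arguments
            return (a + b) * c
        raise NotImplementedError(mr.predicate)

mr = QueryConverter().convert("what is 3 plus 4, times 5?")
print(ArithmeticExpert().answer(mr))   # 35
```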

What is this “meaning representation language”? From Roger Schank’s Conceptual Dependency Theory to the more recent Abstract Meaning Representation (AMR), a lot of interesting research is happening in the area of Meaning Representation. Of course, much more remains to be done.
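For concreteness, here is the well-known AMR rendering, in PENMAN notation, of the sentence “The boy wants to go”:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```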

Here is what I would like to suggest: Just like we have a Java Virtual Machine (JVM), we should devise a Cognitive Virtual Machine (CVM) architecture that mimics human cognition and, as part of that, define the primitive instructions that can be used in the Meaning Representation Language. This will help standardize the MRL. Domain experts can use this MRL to build custom intelligent solutions for their respective domains. I know this is a challenging problem to solve, but if the kind of collective effort (and money) that is being pumped into Machine Learning is also put into this CVM design, it will become a reality in the next decade.
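What might such primitive instructions look like? Purely as a speculative sketch, here is a tiny instruction set loosely borrowed from Schank’s Conceptual Dependency primitives; a real CVM design would need a far richer and more carefully worked-out set:

```python
# Purely speculative: a tiny "instruction set" for an MRL, loosely borrowed
# from Schank's Conceptual Dependency primitives.
from enum import Enum, auto

class Primitive(Enum):
    PTRANS = auto()   # physical transfer of an object from one place to another
    ATRANS = auto()   # abstract transfer, e.g. of possession or ownership
    MTRANS = auto()   # transfer of information between agents or memories
    MBUILD = auto()   # building new information from old (inference)
    ATTEND = auto()   # directing a sense organ towards a stimulus
    SPEAK  = auto()   # producing language output

# "The teacher told the student the answer" might compile, very roughly, to:
instruction = (Primitive.MTRANS,
               {"actor": "teacher", "object": "answer", "recipient": "student"})
```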

Building an intelligent system then becomes a “plug-and-play” exercise:

[Image: Building Intelligent Solutions of the Future]
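The “plug-and-play” aspect can be sketched as a simple registry: domain experts register the domains they handle, and the core agent only dispatches meaning representations to them. Again, every name here is illustrative:

```python
# A sketch of the "plug-and-play" idea: domain experts register the domains
# they can handle, and the core agent only dispatches meaning representations.
from typing import Callable, Dict

EXPERTS: Dict[str, Callable] = {}

def register(domain: str):
    """Plug a new domain expert into the system without touching the core."""
    def wrap(fn: Callable) -> Callable:
        EXPERTS[domain] = fn
        return fn
    return wrap

@register("arithmetic")
def arithmetic_expert(mr):
    ...  # delegate to a symbolic engine

@register("chemistry")
def chemistry_expert(mr):
    ...  # delegate to a chemistry knowledge base

def dispatch(mr):
    return EXPERTS[mr.domain](mr)   # the core dispatcher never changes
```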

An intelligent “chat agent” of the future can then be built by suitably integrating these components:

[Image: Intelligent Chat Agent of the Future]

It might appear far-fetched today, but when experts from different disciplines come together, the Cognitive Virtual Machine will become a reality, and Artificial Intelligence will get a major boost!

Have a nice weekend!


Reader Comments

  1. T Ashok says:

    CVM & MRL – what interesting ideas they are! Interesting that you brought to light the knowledge representation schemes of Roger Schank!

    So, is the notion of meaning tight or loose? For example, Wolfram seems ‘tight’ while ChatGPT seems loose; both have their advantages and disadvantages.
