Symbolic Generative AI

We claim that symbolic, rule-based AIs – millions of which have been deployed since the phrase ‘expert system’ became popular in the 1980s – can and should be repurposed as generative AIs, enabled by recent advances in mathematics and computer science. Although how to do so has been well known to computer scientists since the 1970s, it is only now, post-2020, that computers and algorithms are fast enough to take advantage of it.

How does this work? ‘Generativity’ is about generating new from old. In the case of ChatGPT, new English text (the response to a question) is generated from old English text (the training corpus). In the case of Stable Diffusion, new images are generated from old images. Although ChatGPT and Stable Diffusion operate by statistical, heuristic methods, symbolic AIs can perform many of the same generative tasks, as evidenced by the eerie similarity between the popular discourse around the ‘Eliza’ chatbot of the 1960s and today’s ChatGPT.

Generativity is not about machine learning: it is about generating new things, as opposed to simply answering questions. And for the most part, the same sets of rules that define expert systems can be re-used for generative purposes; we need only change the underlying algorithms that we run on them. Rather than run deduction/inference/entailment algorithms, we run model completion/‘chase’ algorithms. As we explain in this talk, only some logics admit model completion, providing concrete guidance about which logics (and therefore technologies) to use for generative symbolic AI purposes.
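To make the deduction-versus-completion contrast concrete, here is a minimal, illustrative sketch of a chase procedure over rules of the expert-system flavour (tuple-generating dependencies). Instead of answering yes/no entailment queries, it *completes* a set of facts into a model, inventing fresh labelled nulls for existentially quantified variables. All relation names, the rule, and the helper functions below are our own illustrative assumptions, not any particular system's API.

```python
from itertools import count

_null = count(1)  # source of fresh labelled nulls (_n1, _n2, ...)

def is_var(t):
    # convention: variables are capitalised strings, constants are lowercase
    return isinstance(t, str) and t[:1].isupper()

def match(body, facts, subst=None):
    """Yield every substitution mapping all body atoms onto known facts."""
    subst = subst or {}
    if not body:
        yield subst
        return
    rel, *args = body[0]
    for fact in facts:
        if fact[0] != rel or len(fact) != len(body[0]):
            continue
        s, ok = dict(subst), True
        for a, v in zip(args, fact[1:]):
            if is_var(a):
                if s.setdefault(a, v) != v:  # consistent binding?
                    ok = False
                    break
            elif a != v:
                ok = False
                break
        if ok:
            yield from match(body[1:], facts, s)

def chase(facts, rules, max_rounds=10):
    """Complete `facts` under `rules` (restricted chase, naive fixpoint)."""
    facts = set(facts)
    for _ in range(max_rounds):
        new = set()
        for body, head in rules:
            for s in match(list(body), facts):
                # restricted chase: skip if the head is already satisfied
                if next(match(list(head), facts, dict(s)), None) is not None:
                    continue
                local = dict(s)
                for rel, *args in head:
                    inst = []
                    for a in args:
                        if is_var(a) and a not in local:
                            # existential variable: invent a labelled null
                            local[a] = f"_n{next(_null)}"
                        inst.append(local[a] if is_var(a) else a)
                    new.add((rel, *inst))
        if new <= facts:
            break
        facts |= new
    return facts

# One hypothetical rule: every employee works in some department.
#   employee(X)  ->  exists D. works_in(X, D), dept(D)
rules = [
    ([("employee", "X")],
     [("works_in", "X", "D"), ("dept", "D")]),
]
model = chase({("employee", "alice")}, rules)
```

Starting from the single fact `employee(alice)`, the chase *generates* new facts `works_in(alice, _n1)` and `dept(_n1)` rather than merely answering whether a query is entailed – this is the sense in which running a different algorithm over the same rules yields generative behaviour.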
