How does this work? ‘Generativity’ is about generating new from old. In the case of ChatGPT, new English text (the response to a prompt) is generated from old English text (the training corpus). In the case of Stable Diffusion, new images are generated from old images. Although ChatGPT and Stable Diffusion operate by statistical, heuristic methods, symbolic AIs can perform many of the same generative tasks, as evidenced by the eerie similarity between the popular discourse around the ‘ELIZA’ chatbot of the 1960s and today’s ChatGPT.
Generativity is not about machine learning: it is about generating new things, as opposed to simply answering questions. And for the most part, the same sets of rules that define expert systems can be re-used for generative purposes; we need only change the underlying algorithms that we run on them: rather than run deduction/inference/entailment algorithms, we run model completion (‘chase’) algorithms. As we explain in this talk, only some logics admit model completion, which provides concrete guidance about which logics (and therefore which technologies) to use for generative symbolic AI.
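To make the contrast concrete, below is a minimal sketch, in Python, of the ‘oblivious’ variant of the chase over existential rules (all names and the data encoding are illustrative, not drawn from any particular library): facts are tuples, upper-case symbols are variables, and head variables not bound by the rule body are filled with fresh labelled nulls. Note that the example rule set (‘every employee has a manager, who is also an employee’) never reaches a fixpoint, illustrating the talk’s point that only some logics/rule sets admit model completion.

```python
import itertools

fresh = itertools.count()  # source of fresh labelled nulls for existentials

def match(pattern, fact, binding):
    """Try to extend `binding` so that `pattern` matches `fact`."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, v in zip(pattern[1:], fact[1:]):
        if isinstance(p, str) and p.isupper():   # upper-case = variable
            if p in b and b[p] != v:
                return None
            b[p] = v
        elif p != v:                             # constant mismatch
            return None
    return b

def matches(body, facts, binding=None):
    """Yield every binding under which all body atoms match some fact."""
    if binding is None:
        binding = {}
    if not body:
        yield binding
        return
    first, rest = body[0], body[1:]
    for fact in facts:
        b = match(first, fact, binding)
        if b is not None:
            yield from matches(rest, facts, b)

def chase(facts, rules, max_rounds=10):
    """Oblivious chase: repeatedly fire rules, inventing fresh nulls for
    existential head variables, until no new facts appear (or we give up)."""
    facts = set(facts)
    for _ in range(max_rounds):
        new = set()
        for body, head in rules:
            for b in matches(body, facts):
                bound = dict(b)
                # invent one fresh null per unbound head variable
                for atom in head:
                    for t in atom[1:]:
                        if isinstance(t, str) and t.isupper() and t not in bound:
                            bound[t] = f"_n{next(fresh)}"
                for atom in head:
                    new.add((atom[0],) + tuple(bound.get(t, t) for t in atom[1:]))
        if new <= facts:
            return facts   # fixpoint reached: a completed model
        facts |= new
    return facts           # gave up: this rule set may not terminate

# Every employee has a manager, who is also an employee:
facts = {("employee", "alice")}
rules = [
    ([("employee", "X")], [("manages", "Y", "X"), ("employee", "Y")]),
]
print(chase(facts, rules, max_rounds=3))
```

Running a deduction algorithm on these same rules would only answer yes/no questions about entailed facts; running the chase instead generates new individuals (the labelled nulls) and new facts about them, which is the sense in which the same rule base becomes generative.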