Wednesday, February 28, 2024

Remembering Doug Lenat (1950–2023) and His Quest to Capture the World with Logic

It was definitely a “you’re on my turf” kind of response. And I wasn’t sure what to expect from Doug. But a few days later we had a long call with Doug and some of the senior members of what was now the Cycorp team. And Doug did something that deeply impressed me. Rather than for example nitpicking that Wolfram|Alpha was “not AI” he basically just said “We’ve been trying to do something like this for years, and now you’ve succeeded”. It was a great—and even inspirational—show of intellectual integrity. And whatever I might think of CYC and Doug’s other work (and I’d never formed a terribly clear opinion), this for me put Doug firmly in the category of people to respect.

Doug wrote a blog post entitled “I was positively impressed with Wolfram Alpha”, and immediately started inviting us to various AI and industry-pooh-bah events to which he was connected.

Doug seemed genuinely pleased that we had made such progress in something so close to his longtime objectives. I talked to him about the comparison between our approaches. He was just working with “pure human-like reasoning”, I said, like one would have had to do in the Middle Ages. But, I said, “In a sense we cheated”. Because we used all the things that got invented in modern times in science and math and so on. If he wanted to work out how some mechanical system would behave, he would have to reason through it: “If you push this down, that pulls up, then this rolls”, etc. But with what we’re doing, we just have to turn everything into math (or something like it), then systematically solve it using equations and so on.

And there was something else too: we weren’t trying to use just logic to represent the world, we were using the full power and richness of computation. In talking about the Solar System, we didn’t just say that “Mars is a planet contained in the Solar System”; we had an algorithm for computing its detailed motion, and so on.

Doug and CYC had also emphasized the scraps of knowledge that seem to appear in our “common sense”. But we were interested in systematic, computable knowledge. We didn’t just want a few scattered “common facts” about animals. We wanted systematic tables of properties of millions of species. And we had very general computational ways to represent things: not just words or tags for things, but systematic ways to capture computational structures, whether they were entities, graphs, formulas, images, time series, or geometrical forms, or whatever.

I think Doug viewed CYC as some kind of formalized idealization of how he imagined human minds work: providing a framework into which a large collection of (fairly undifferentiated) knowledge about the world could be “poured”. At some level it was a very “pure AI” concept: set up a generic brain-like thing, then “it’ll just do the rest”. But Doug still felt that the thing had to operate according to logic, and that what was fed into it also had to consist of knowledge packaged up in the form of logic.

But while Doug’s starting points were AI and logic, mine were something different—in effect computation writ large. I always viewed logic as something not terribly special: a particular formal system that described certain kinds of things, but didn’t have any great generality. To me the truly general concept was computation. And that’s what I’ve always used as my foundation. And it’s what’s now led to the modern Wolfram Language, with its character as a full-scale computational language.

There is a principled foundation. But it’s not logic. It’s something much more general, and structural: arbitrary symbolic expressions and transformations of them. And I’ve spent much of the past forty years building up coherent computational representations of the whole range of concepts and constructs that we encounter in the world and in our thinking about it. The goal is to have a language—in effect, a notation—that can represent things in a precise, computational way. But then to actually have the built-in capability to compute with that representation. Not to figure out how to string together logical statements, but rather to do whatever computation might need to be done to get an answer.
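To make the idea of "arbitrary symbolic expressions and transformations of them" concrete, here is a minimal, purely illustrative sketch in Python (not Wolfram's actual implementation): expressions are nested tuples with a head and arguments, and "computation" is the repeated application of rewrite rules until nothing changes.

```python
# Minimal sketch of symbolic expressions + rewrite rules.
# An expression is a nested tuple like ("Plus", "x", 0); computation
# is bottom-up application of rules until a fixed point is reached.

def rewrite(expr, rules):
    """Rewrite subexpressions first, then try each rule at the top level."""
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(rewrite(arg, rules) for arg in expr[1:])
    for rule in rules:
        new = rule(expr)
        if new is not None:          # a rule fired: keep rewriting the result
            return rewrite(new, rules)
    return expr                      # no rule applies: expression is in normal form

# Two toy simplification rules: x + 0 -> x, and x * 1 -> x.
def plus_zero(e):
    if isinstance(e, tuple) and e[0] == "Plus" and e[2] == 0:
        return e[1]

def times_one(e):
    if isinstance(e, tuple) and e[0] == "Times" and e[2] == 1:
        return e[1]

expr = ("Times", ("Plus", "x", 0), 1)
print(rewrite(expr, [plus_zero, times_one]))  # -> x
```

The point of the sketch is the generality: nothing here is specific to arithmetic or to logic; the same machinery transforms whatever structures the expressions happen to represent.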

But beyond their technical visions and architectures, there is a certain parallelism between CYC and the Wolfram Language. Both have been huge projects. Both have been in development for more than forty years. And both have been led by a single person all that time. Yes, the Wolfram Language is certainly the larger of the two. But in the spectrum of technical projects, CYC is still a highly exceptional example of longevity and persistence of vision—and a truly impressive achievement.

Later Years

After Wolfram|Alpha came on the scene I started interacting more with Doug, not least because I often came to the SXSW conference in Austin, and would usually make a point of reaching out to Doug when I did. Could CYC use Wolfram|Alpha and the Wolfram Language? Could we somehow usefully connect our technology to CYC?

When I talked to Doug he tended to downplay the commonsense aspects of CYC, instead talking about applications in defense, intelligence analysis, healthcare and the like. He’d enthusiastically tell me about particular kinds of knowledge that had been put into CYC. But time and time again I’d have to tell him that actually we already had systematic data and algorithms in those areas. Often I felt a bit bad about it. It was as if he’d been painstakingly planting crops one by one, and we’d come through with a giant industrial machine.

In 2010 we made a big “Timeline of Systematic Data and the Development of Computable Knowledge” poster—and CYC was on it as one of the six entries that began in the 1980s (alongside, for example, the web). Doug and I continued to talk about somehow working together, but nothing ever happened. One problem was the asymmetry: Doug could play with Wolfram|Alpha and Wolfram Language any time. But I’d never once actually been able to try CYC. Several times Doug had promised API keys, but none had ever materialized.

Eventually Doug said to me: “Look, I’m worried you’re going to think it’s bogus”. And particularly knowing Doug’s history with alleged “bogosity” I tried to assure him my goal wasn’t to judge. Or, as I put it in a 2014 email: “Please don’t worry that we’ll think it’s ‘bogus’. I’m interested in finding the good stuff in what you’ve done, not criticizing its flaws.”

But when I was at SXSW the next year Doug had something else he wanted to show me. It was a math education game. And Doug seemed incredibly excited about its videogame setup, complete with 3D spacecraft scenery. My son Christopher was there and politely asked if this was the default Unity scenery. I kept on saying, “Doug, I’ve seen videogames before; show me the AI!” But Doug didn’t seem interested in that anymore, eventually saying that the game wasn’t using CYC—though did still (somewhat) use “rule-based AI”.

I’d already been talking to Doug, though, about what I saw as being an obvious, powerful application of CYC in the context of Wolfram|Alpha: solving math word problems. Given a problem, say, in the form of equations, we could solve pretty much anything thrown at us. But with a word problem like “If Mary has 7 marbles and 3 fall down a drain, how many does she now have?” we didn’t stand a chance. Because to solve this requires commonsense knowledge of the world, which isn’t what Wolfram|Alpha is about. But it is what CYC is supposed to be about. Sadly, though, despite many reminders, we never got to try this out. (And, yes, we built various simple linguistic templates for this kind of thing into Wolfram|Alpha, and now there are LLMs.)
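The "simple linguistic templates" mentioned above might look something like the following caricature (an illustration of the general idea, not Wolfram|Alpha's actual implementation): pattern-match one narrow phrasing of a word problem and turn it into arithmetic. The brittleness is exactly the point; anything beyond the template would need commonsense knowledge.

```python
import re

# A hypothetical template for one narrow phrasing of a "loss" word problem.
# The subtraction is hardcoded: the template itself encodes the commonsense
# fact that marbles lost down a drain are gone, which is why this approach
# doesn't generalize.
TEMPLATE = re.compile(
    r"(?:If )?(\w+) has (\d+) (\w+) and (\d+) .*?, how many does \w+ now have\?",
    re.IGNORECASE,
)

def solve(problem):
    m = TEMPLATE.search(problem)
    if m is None:
        return None  # no template matches: no commonsense to fall back on
    return int(m.group(2)) - int(m.group(4))

print(solve("If Mary has 7 marbles and 3 fall down a drain, "
            "how many does she now have?"))  # -> 4
```

Rephrase the problem even slightly ("Three of Mary's marbles were lost...") and the template fails, which is precisely the gap that CYC-style commonsense reasoning, and now LLMs, aim to fill.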

Independent of anything else, it was impressive that Doug had kept CYC and Cycorp running all those years. But when I saw him in 2015 he was enthusiastically telling me about a deal he was making around CYC that, as I told him, seemed to me too good to be true. A little later there was a strange attempt to sell us the technology of CYC, and I don’t think our teams interacted again after that.

I personally continued to interact with Doug, though. I sent him things I wrote about the formalization of math. He responded pointing me to things he’d done on AM. On the tenth anniversary of Wolfram|Alpha Doug sent me a nice note, offering that “If you want to team up on, e.g., knocking the Winograd sentence pairs out of the park, let me know.” I have to say I wondered what a “Winograd sentence pair” was. It felt like some kind of challenge from an age of AI long past (apparently it has to do with identifying pronoun reference, which of course has become even more difficult in modern English usage).

And as I write this today, I realize a mistake I made back in 2016. I had for years been thinking about what I’ve come to call “symbolic discourse language”—an extension of computational language that can represent “everyday discourse”. And—stimulated by blockchain and the idea of computational contracts—I finally wrote something about this in 2016, and I now realize that I overlooked sending Doug a link to it. Which is a shame, because maybe it would have finally been the thing that got us to connect our systems.

And Now There Are LLMs

Doug was a person who believed in formalism, particularly logic. And I have the impression that he always considered approaches like neural nets not really to have a chance of “solving the problem of AI”. But now we have LLMs. So how do they fit in with things like the ideas of CYC?

One of the surprises of LLMs is that they often seem, in effect, to use logic, even though there’s nothing in their setup that explicitly involves logic. But (as I’ve described elsewhere) I’m pretty sure what’s happened is that LLMs have “discovered” logic much as Aristotle did—by looking at lots of examples of statements people make and identifying patterns in them. And in a similar way LLMs have “discovered” lots of commonsense knowledge, and reasoning. They’re just following patterns they’ve seen, but—probably in effect organized into what I’ve called a “semantic grammar” that determines “laws of semantic motion”—that’s enough to often achieve some fairly impressive commonsense-like results.

I suspect that a great many of the statements that were fed into CYC could now be generated fairly successfully with LLMs. And perhaps one day there’ll be good enough “LLM science” to be able to identify mechanisms behind what LLMs can do in the commonsense arena—and maybe they’ll even look a bit like what’s in CYC, and how it uses logic. But in a sense the very success of LLMs in the commonsense arena strongly suggests that you don’t fundamentally need deep “structured logic” for that. Though, yes, the LLM may be immensely less efficient—and perhaps less reliable—than a direct symbolic approach.

It’s a very different story, by the way, with computational language and computation. LLMs are through and through based on language and patterns to be found through it. But computation—as it can be accessed through structured computational language—is something very different. It’s about processes that are in a sense thoroughly non-human, and that involve much deeper following of general formal rules, as well as much more structured kinds of data, etc. An LLM might be able to do basic logic, as humans have. But it doesn’t stand a chance on things where humans have had to systematically use formal tools that do serious computation. Insofar as LLMs represent “statistical AI”, CYC represents a certain level of “symbolic AI”. But computational language and computation go much further—to a place where LLMs can’t and shouldn’t follow, and where they should instead simply call on them as tools.

Doug always seemed to have a very optimistic view of the promise of AI. In 2013 he wrote to me:

Of course you are coming at this from the opposite end of the Chunnel than we are, but you’re proceeding, frankly, much more rapidly toward us than we are toward you. I probably appreciate the significance of what you’ve accomplished more than almost anyone else: when your and our approaches do meet up, the combination will be the existence of real AI on Earth. I think that’s the main motivation in your life, as it is in mine: to live to see real AI, with the obvious sweeping change in all aspects of life when there is (i) cradle-to-grave 24×7 Aristotle mentoring and advising for every human being and, in effect, (ii) a Land of Faerie intelligence effectively present [e.g., that one can converse with] in every door, floor tile,…every tangible object above a certain microscopic size.) And to live to see and be users ourselves in an era of massively amplified human intelligence …



