Software module for representing, searching, and reasoning about everyday common-sense knowledge

Our computers and AI systems can now do many impressive things, but they still lack what we humans call "common sense" and the ability to understand the full meaning of text in English and other human languages. These tasks require the ability to represent complicated general knowledge in the machine – and a lot of it. We humans know about tables, chairs, cars, trucks, doors, locks, keys, stop signs, sharp knives, policemen, electric outlets… We not only know the properties of these things, but we know what they can do, what we can do with them, and what we shouldn't do. We assume that all other normal humans in our culture know these things too – if they don't, we say that they lack common sense.

Most of our current AI systems know very few of these things, if any. This leads to AI systems that behave like experts in certain narrow areas, but that can be hard to deal with and prone to foolish blunders that no human would ever make.

We humans, by contrast, can not only represent and regurgitate our stored knowledge – we can reason with it. If I tell you that "Clyde" is an elephant, you suddenly appear to know a lot about Clyde, without being told: He is gray, a mammal, and definitely not a plant. He has four legs, a liver, and a backbone. He can move around, must eat occasionally, and may be dangerous to nearby humans if he is untrained. He would not be a good pet for someone living in a small apartment.
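The "Clyde" example above is classic inheritance reasoning: an individual inherits the properties of every type above it in the is-a hierarchy. The sketch below illustrates the idea in miniature; the type names and property tables are invented for illustration and are not Scone's actual representation or API.

```python
# Minimal sketch of inheritance-style inference: an individual inherits
# the default properties of every type above it in the is-a hierarchy.
# All names here are illustrative, not Scone's actual API.

PARENTS = {                      # is-a links
    "elephant": ["mammal"],
    "mammal": ["animal"],
}
PROPERTIES = {                   # default properties attached to each type
    "elephant": {"color": "gray", "legs": 4},
    "mammal": {"has_backbone": True, "must_eat": True},
    "animal": {"can_move": True},
}

def inferred_properties(type_name):
    """Collect properties from type_name and all of its ancestors."""
    props = {}
    stack = [type_name]
    while stack:
        t = stack.pop()
        for key, value in PROPERTIES.get(t, {}).items():
            props.setdefault(key, value)   # closer types take precedence
        stack.extend(PARENTS.get(t, []))
    return props

# Telling the system "Clyde is an elephant" makes everything attached to
# "elephant", "mammal", and "animal" immediately true of Clyde:
clyde = inferred_properties("elephant")
```

The `setdefault` call gives the nearest type priority, so a more specific type can override a general default (the standard "penguins don't fly" pattern) without touching the rest of the hierarchy.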

Language understanding depends on this collection of background knowledge as well. As we humans listen to a seemingly simple sentence – "The tanks fired on the demonstrators" – we are constantly (usually without conscious mental effort) disambiguating the words (what kind of "tanks"?), filling in missing elements, fitting this into a bigger picture, and making predictions: "How many demonstrators were injured?" It is our background knowledge, plus our ability to efficiently reason about this knowledge, that make this all possible.

Dr. Scott Fahlman, a Research Professor at Carnegie Mellon University, has been working on the inter-related problems of knowledge, common-sense reasoning, planning, and language understanding for many years. His research group has developed Scone, a high-performance, open-source "knowledge-base" system. Scone is meant to serve as a software component – a sort of "smart memory" system – in AI applications in many areas. It not only represents general knowledge in symbolic form, but it has built-in search and reasoning capabilities.

One unique feature of Scone is its "multiple-context" facility, which enables it to represent many distinct but similar world-models in the same memory system, without getting them mixed up. This allows Scone to represent information that is true in one time or place but not another, different points of view, hypotheses, fantasy worlds, and so on.
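The multiple-context idea can be sketched as a chain of fact stores, where each context inherits its parent's facts but can add or override them locally. This is only a toy illustration of the concept, with invented names; Scone's real context machinery is far richer and more efficient.

```python
# Toy sketch of a multiple-context fact store: each context inherits
# facts from its parent but can add or override them locally, so
# distinct world-models share one memory without getting mixed up.

class Context:
    def __init__(self, parent=None):
        self.parent = parent
        self.facts = {}          # (subject, relation) -> value

    def add(self, subject, relation, value):
        self.facts[(subject, relation)] = value

    def lookup(self, subject, relation):
        ctx = self
        while ctx is not None:                 # walk up the context chain
            if (subject, relation) in ctx.facts:
                return ctx.facts[(subject, relation)]
            ctx = ctx.parent
        return None

# Shared general knowledge lives in a base context...
reality = Context()
reality.add("elephant", "color", "gray")
reality.add("elephant", "legs", 4)

# ...while a fantasy world inherits it but overrides what it needs,
# without disturbing the base context.
fantasy = Context(parent=reality)
fantasy.add("elephant", "color", "pink")
```

Here `fantasy` sees pink elephants while still inheriting everything else (four legs, and so on), and `reality` is untouched; the same pattern covers hypotheticals, other times and places, and differing points of view.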

Scone has been in active use for some time. It has been used in a number of projects internal to Carnegie Mellon, and by some external partners, including a group at the University of Castilla-La Mancha in Spain, who are applying Scone in a number of "smart city" applications. But there is much left to do on this project.

Current projects include:

  • Making Scone a well-supported open-source resource for researchers and for builders of commercial applications: In order to make Scone available to all, Dr. Fahlman plans to write a low-cost tutorial Ebook and clean up some of the existing Scone software. Ideally, Dr. Fahlman would add one well-trained project member to focus on user support.
  • Extending Scone's "core" collection of general, common-sense knowledge: In this project, Dr. Fahlman is building tools to import knowledge from existing collections in a variety of different formats, including WordNet, dictionary definitions, and online tables.
  • Extending Scone's capabilities for representing and reasoning about "episodic knowledge" such as actions, events, sequences, plans, recipes for action, time durations, etc.: Currently, Scone is well-developed for static knowledge and has basic support for actions and events, but Dr. Fahlman plans to expand this and add a resilient, recipe-driven planner.
  • Completing the system for natural-language understanding (NLU), from text or speech to a knowledge representation we can reason about: Dr. Fahlman’s team has already developed several prototype NLU systems. The goal now is to combine the best features of these prototypes and fully integrate them with the Scone knowledge base system. That will make it possible for the NLU system to resolve ambiguities as each new phrase arrives, and for users to add new knowledge to Scone simply by telling it things in English (or some other human language).
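The kind of knowledge-driven disambiguation described earlier – choosing the right sense of "tank" in "The tanks fired on the demonstrators" – can be sketched as scoring candidate senses against background knowledge. The sense inventory and capability sets below are invented for illustration, and this is not how Scone's NLU prototypes actually work.

```python
# Toy sketch of knowledge-driven word-sense disambiguation:
# pick the sense whose known capabilities fit the verb in the sentence.
# The sense inventory and capability lists are invented for illustration.

SENSES = {
    "tank": [
        {"gloss": "armored combat vehicle", "can": {"fire", "drive"}},
        {"gloss": "container for liquid",   "can": {"hold", "leak"}},
    ],
}

def disambiguate(word, verb):
    """Return the gloss of the first sense that supports the verb, if any."""
    for sense in SENSES.get(word, []):
        if verb in sense["can"]:
            return sense["gloss"]
    return None

# "The tanks fired on the demonstrators" -> the military sense wins,
# because only that sense of "tank" can plausibly fire on something.
```

A real system would weigh many such constraints at once as each phrase arrives, which is why tight integration between the NLU front end and the knowledge base matters.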

Professor Scott Fahlman's fascination with robots began when he read Isaac Asimov's robot novels in the third grade. This interest gradually expanded to a deeper and more general curiosity about how human intelligence works and what mechanisms are responsible.

As a grad student in the MIT AI Lab in the 1970s, he focused on understanding and trying to replicate "human-like" intelligence – first trying to replicate the very flexible and resilient human capability for planning and problem solving, and then working on ways to store large amounts of general knowledge in a machine, and to make that knowledge play a useful and efficient role in human-like common-sense reasoning.

After getting his doctorate in Artificial Intelligence in 1977, he joined the faculty of Carnegie Mellon University, first in the Computer Science Department and now as a Research Professor in CMU's Language Technologies Institute.

In his career as a researcher, Prof. Fahlman has worked in many areas of AI: knowledge representation, natural language understanding, planning, image processing, machine learning, artificial neural networks, intelligent user interfaces, and the use of novel parallel computer designs to solve AI problems. The common goal of all these efforts has been to understand human-like intelligence from a computational perspective, and to replicate at least some parts of this.

He is particularly interested in how to handle the things that we humans do without any apparent mental effort, but that require enormous amounts of computation when computers do them. We still cannot replicate most of these capabilities in our computers, even after more than 50 years of research on AI. Prof. Fahlman, however, believes that real progress in this area is finally within reach. "Deep Learning" neural-network systems are part of the answer, but for higher-level symbolic thought and true understanding of human language, he believes that something like his Scone knowledge-base system will be a necessary part of the solution.

When not working on AI, Prof. Fahlman enjoys writing, cooking, and photography. He has achieved some notoriety as the person who proposed the :-) and :-( emoticons for use in online messages back in 1982 – a frivolous ten-minute post that went viral, spread around the world, and gradually evolved into today's emoticons and emoji.

For more information, visit http://www.cs.cmu.edu/~sef/

Awards:

  • Outstanding Technology Contributions Award, Web Intelligence Consortium, 2013
  • Elected Fellow, Association for the Advancement of AI (AAAI), 2003