Concept mining

Concept mining is a discipline at the nexus of data mining, text mining, and linguistics, drawing on artificial intelligence and statistics. It aims to extract concepts from documents. Since at face value documents consist of words and other symbols rather than concepts, the problem is nontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents.

Methods
Traditionally, the conversion of words to concepts has been performed using a thesaurus, and for computational techniques the tendency is to do the same. The thesauri used are either created specifically for the task or are pre-existing language models, usually related to Princeton's WordNet.
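
The mapping can be sketched as a simple lookup from words to candidate concepts. The tiny thesaurus below is invented for illustration; a real system would draw on a resource such as WordNet.

```python
# Minimal sketch of thesaurus-based word-to-concept mapping.
# The thesaurus here is a toy stand-in for a real resource like WordNet.
THESAURUS = {
    "bank":  ["financial_institution", "river_edge"],
    "loan":  ["financial_transaction"],
    "river": ["body_of_water"],
    "money": ["currency"],
}

def words_to_concepts(words):
    """Map each word to its candidate concepts; unknown words map to []."""
    return {w: THESAURUS.get(w, []) for w in words}

concepts = words_to_concepts(["bank", "loan", "unknownword"])
# Note that "bank" is ambiguous: it maps to two candidate concepts.
```

The ambiguous entry for "bank" illustrates the disambiguation problem discussed next: the thesaurus alone cannot say which concept a given occurrence denotes.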

The mapping of words to concepts is often ambiguous: typically each word in a given language relates to several possible concepts. Humans use the surrounding context, where available, to disambiguate the various meanings of a piece of text. Machine translation systems cannot easily infer context, which gives rise to the notorious howlers such systems generate.

For the purposes of concept mining, however, these ambiguities tend to matter less than they do in machine translation: in large documents the ambiguities tend to even out, much as is the case with text mining.

Many disambiguation techniques may be used, including linguistic analysis of the text and word and concept association frequencies inferred from large text corpora.
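
One association-frequency approach can be sketched as follows: each candidate concept carries a set of associated words (which in practice might come from glosses or corpus co-occurrence counts), and the concept whose associates overlap the surrounding context most wins. All data below is invented for illustration.

```python
# Illustrative disambiguation by context overlap, in the spirit of
# simplified Lesk-style heuristics. The associate sets are toy data.
CONCEPT_ASSOCIATES = {
    "financial_institution": {"loan", "money", "deposit", "interest"},
    "river_edge": {"river", "water", "fishing", "mud"},
}

def disambiguate(candidate_concepts, context_words):
    """Pick the candidate concept with the largest context overlap."""
    context = set(context_words)
    return max(
        candidate_concepts,
        key=lambda c: len(CONCEPT_ASSOCIATES.get(c, set()) & context),
    )

sense = disambiguate(
    ["financial_institution", "river_edge"],
    ["the", "bank", "approved", "the", "loan", "with", "low", "interest"],
)
# → "financial_institution" (overlap with "loan" and "interest")
```

Real systems weight the associations by frequency rather than counting raw overlap, but the principle is the same.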

Applications
One spin-off of calculating document statistics in the concept domain, rather than the word domain, is that concepts form natural tree structures based on hypernymy and meronymy. These structures can be used to produce simple tree-membership statistics that locate any document in a Euclidean concept space. If the size of a document is also treated as a dimension of this space, an extremely efficient indexing system can be created. This technique is currently in commercial use for locating similar legal documents in a 2.5-million-document corpus.

Standard numeric clustering techniques may be applied in this "concept space" to locate and index documents by inferred topic. These are numerically far more efficient than their text mining counterparts, and tend to behave more intuitively, in that they map better to the similarity measures a human would generate.

Applications include:
 * Detecting and indexing similar documents in large corpora
 * Clustering documents by topic
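
The tree-membership idea can be sketched as follows: count, for each node in a hypernym tree, how many of a document's concepts fall under it, and treat those counts as coordinates in a Euclidean space. The tree and documents below are toy data chosen for illustration.

```python
import math

# Sketch of locating documents in a "concept space" via tree-membership
# counts over a hypernym tree. The tree and documents are toy data.
HYPERNYMS = {            # child concept -> parent concept
    "dog": "animal", "cat": "animal",
    "animal": "entity",
    "car": "vehicle", "truck": "vehicle",
    "vehicle": "entity",
}
AXES = ["animal", "vehicle", "entity"]   # tree nodes used as dimensions

def membership_vector(doc_concepts):
    """For each tree node, count the document concepts falling under it."""
    counts = dict.fromkeys(AXES, 0)
    for c in doc_concepts:
        node = HYPERNYMS.get(c)
        while node is not None:          # walk up to the root
            counts[node] += 1
            node = HYPERNYMS.get(node)
    return [counts[a] for a in AXES]

def distance(doc_a, doc_b):
    """Euclidean distance between two documents in concept space."""
    va, vb = membership_vector(doc_a), membership_vector(doc_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(va, vb)))

pets = ["dog", "cat"]        # -> [2, 0, 2]
garage = ["car", "truck"]    # -> [0, 2, 2]
# Documents about different topics end up far apart; similar ones close.
```

Standard numeric clustering (e.g. k-means) can then be run directly on these vectors, which is what makes indexing in concept space so cheap compared with word-level representations.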

Benefits
Text mining models tend to be very large. A model that attempts to classify, for instance, news stories using Support Vector Machines or the Naïve Bayes algorithm will run to megabytes, and is thus slow to load and evaluate. Concept mining models can be minute in comparison, often only hundreds of bytes.

Software
Concept mining is very much in a state of flux, but a few commercial products exist:

A live demo of detecting plagiarised blog stories is available on the blog plagiarism demo page.
 * Scientio's ConceptMine product is an example of an infrastructure component that can add concept mining to other applications.
 * Expert System's Cogito product uses English and Italian enhanced versions of WordNet to identify the author's intended concept based on an entire sentence, paragraph, web page or document. Spanish, German and Arabic versions will be completed by the end of 2007; Chinese and other languages are planned for 2008.
 * Nielsen's Buzz Metrics uses related techniques to try to track common concepts and trends in blogs.
 * ConceptNet, a project attempting to extract concept relationships from a large text corpus.
 * PolyAnalyst, a commercial data mining/text mining tool that uses WordNet and supports generalizing keywords via hypernymy, among other features.
 * DrugSense NewsBot, a specialized concept mining/text mining system that uses over 200 hand-crafted concepts to categorize and classify news articles relating to illegal drugs. The concepts guide round-the-clock back-end spider processes that discover around 800 breaking drug news articles per day, and drive the creation of site HTML pages, news feeds and more. A proprietary "Concept Server" engine finds concepts in documents efficiently, and inferences generated from the detected concepts feed various drug news analysis products, including concept-based automated propaganda analysis.

Related reading

 * Skimming for Context, a master's thesis in computer science by Ole Torp Lassen, introduces and illustrates the notion of concept mining discussed in this article.