Using Augmented Transition Networks (ATN) for Information Extraction

Posted on November 23, 2019 in LISP, Natural Language Processing, Programming

After Woods’ paper [1], Augmented Transition Networks (ATNs) became popular in the 1970s for parsing text. An ATN is a generalized transition network with three major enhancements:

  1. Support for recursive transitions, including jumping to other ATNs
  2. Performing arbitrary actions when edges are traversed
  3. Remembering state through the use of registers

See the “Further Reading” section at the end of this article for some good reading material on ATNs. 

Although the ATN formalism is quite powerful, because of its procedural nature, it tends to be fairly verbose. For this reason, designing and maintaining any non-trivial ATN grammar requires considerable effort.

Definite Clause Grammar [5], introduced by Pereira and Warren in 1980, was implemented in Prolog and was shown to be clearer and more compact than ATN for the same problem. (I have also written an article on using DCG in Lisp.)

Without getting into any discussion (or debate) on whether ATNs are still relevant for solving general parsing problems, in today’s article I would like to show how we can use ATNs to extract meaningful information from unstructured text. 

A Simple Grammar

Let me start with a highly simplified English grammar for parsing a sentence made up of a noun phrase followed by a verb phrase:

[Figure: A Simple ATN Grammar]

The above grammar defines 3 ATNs, namely, Sentence, NP and VP. The Sentence ATN has 3 nodes, whereas the NP and VP ATNs each have 2 nodes. Control starts at the start node of Sentence, and as control moves through the next two nodes, the NP and VP ATNs are visited in turn (in essence, like subroutine calls); if the process is successful, a data structure representing the parsed information is synthesized. Again, I refer you to the reading material mentioned at the end for learning more about the ATN formalism. Here is a convenient graphical representation of the above grammar:

[Figure: Visualizing the ATN]
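
Before moving on, here is a rough sketch of how the three networks could be written down as Lisp data. The arc types (cat, push, pop) and the register actions (setr, getr, to, with * standing for the structure just parsed) follow Woods’ formalism [1]; the encoding and state names are only an assumption for illustration, not the notation used in the figures above.

    ;; An assumed list-based encoding of the grammar above; the state names and
    ;; register actions are illustrative, not taken from the figures.
    (defparameter *sentence-atn*
      '(sentence
        (s    (push np (setr subj *) (to s/np)))    ; parse a noun phrase, save it in SUBJ
        (s/np (push vp (setr pred *) (to s/vp)))    ; parse a verb phrase, save it in PRED
        (s/vp (pop (list 'sentence (getr subj) (getr pred)))))) ; synthesize the result

    (defparameter *np-atn*
      '(np
        (np      (cat art  (setr det *)  (to np/art)))   ; e.g. "the"
        (np/art  (cat noun (setr noun *) (to np/noun)))  ; e.g. "dog"
        (np/noun (pop (list 'np (getr det) (getr noun))))))

    (defparameter *vp-atn*
      '(vp
        (vp      (cat verb   (setr verb *) (to vp/verb)))  ; e.g. "chased"
        (vp/verb (cat adverb (setr adv *)  (to vp/adv)))   ; e.g. "quickly"
        (vp/adv  (pop (list 'vp (getr verb) (getr adv))))))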

Although it might not be obvious, the ATN parser makes use of a lexicon to guide it. For example, the cat arc checks if the current input word belongs to a lexical category such as verb, noun or adverb. It is the lexicon that provides this information to the parser.
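
Although the actual lexicon used by the parser is not shown here, a minimal one can be nothing more than a table mapping words to categories. The sketch below assumes that simple representation:

    ;; A minimal lexicon of the kind a CAT arc might consult. The real lexicon
    ;; may well carry richer information (features, multiple senses, etc.).
    (defparameter *lexicon* (make-hash-table :test #'equal))

    (dolist (entry '(("the" art) ("a" art) ("dog" noun) ("cat" noun)
                     ("chased" verb) ("ran" verb) ("quickly" adverb) ("fast" adverb)))
      (setf (gethash (first entry) *lexicon*) (second entry)))

    (defun word-category (word)
      "Return the lexical category of WORD, or NIL if it is unknown."
      (gethash (string-downcase word) *lexicon*))

    (defun cat-arc-p (word category)
      "The test a CAT arc performs: does WORD belong to CATEGORY?"
      (eq (word-category word) category))

    ;; (cat-arc-p "chased" 'verb) => T
    ;; (cat-arc-p "the" 'adverb)  => NIL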

Here are some sample sentences parsed using this grammar:

[Figure: Parsing Examples]

I hope you can see why the last example fails. As per the grammar, we are expecting an adverb after the word chased, but the actual input is an article. 

Chunking

Now consider this problem: We are given a sentence and we want to identify the verb phrase alone. We are not interested in anything before or after it. For simplicity, let us assume that the verb phrase is as defined in the above grammar – a verb followed by an adverb. Will the above ATN grammar work? 

The above grammar expects a noun phrase of a fixed pattern before the verb phrase. In this sense, it is a bit strict in accepting input sentences. What if the input sentence had a more complex noun phrase, for example, “my cute little dog”, followed by the verb phrase “ran fast”? Even though the verb phrase matches our pattern, the whole parse will fail because of the incorrect noun phrase. In this case, we are interested in chunking, instead of a full parse.

Chunking, as it is used in NLP, represents a form of shallow parsing where the goal is to extract meaningful fragments from given text. In contrast, traditional full parsing attempts to parse the entire text according to some well-defined grammar of the language, resulting in a complete syntactic structure of the text. This might or might not be the preferred approach, depending on the nature of the given text, and what we are looking to extract. Chunking is of practical significance in extracting information from unstructured text.

The popular Python library NLTK has support for chunking based on part-of-speech (POS) tags. Although this will work in many situations, sometimes we might want to chunk based on keywords instead of using POS tags. In some other situations we might want to combine both POS tags and keywords while mining for some information.

Coming back to our ATN, here is a grammar for extracting just the verb phrase from an input sentence:

[Figure: Chunking Grammar]

Here is the corresponding graphical representation:

[Figure: Graphical Representation of the Chunking Grammar]

What is different about this grammar is that, in the start node, it looks for a verb and ignores every other category; any non-verb input is simply skipped (until the end of the sentence is reached). When it sees a verb in the input, it moves to the next node and looks for an adverb.

Here is a sample sentence parsed by this grammar:

[Figure: Chunking: Example 1]

Here is a more interesting parse:

[Figure: Chunking: Example 2]

Can you spot the difference between this example and the previous one?

You can see that there are two verbs in the second example, namely, shunned and worked. When the first verb, shunned, is seen, the parser jumps to the adverb node, but since temptations is a noun and not an adverb, it fails. The parser then backtracks and continues to look for the next verb. It finds worked this time, and the next word happens to be an adverb, satisfying the grammar. The extracted verb phrase is “worked harder”. That is the power of the ATN: it can backtrack as needed until it finds a proper match in the input (or is forced to give up). This can also be a concern from a performance point of view; we want to eliminate (or at least minimize) backtracking, because backtracking is costly. Often, peeking ahead to select a valid option is preferable to blind backtracking, but that is beyond the scope of this article.
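
To make the backtracking idea concrete, here is a small, self-contained sketch in the same spirit (it is not the ATN implementation itself). It scans a list of tagged words for a verb that is immediately followed by an adverb, and simply resumes the scan when the adverb test fails; the tagging in the usage example is illustrative:

    (defun chunk-verb-phrase (tagged-words)
      "Return (VERB ADVERB) for the first verb immediately followed by an
    adverb in TAGGED-WORDS, a list of (word . category) pairs, or NIL."
      (loop for (current . rest) on tagged-words
            when (and (eq (cdr current) 'verb)
                      rest
                      (eq (cdr (first rest)) 'adverb))
              return (list (car current) (car (first rest)))))

    ;; Illustrative tagging in the spirit of the second example above:
    ;; "shunned" is tried first and rejected because "temptations" is a noun,
    ;; so the scan moves on and picks up "worked harder".
    ;; (chunk-verb-phrase '(("he" . pron) ("shunned" . verb) ("temptations" . noun)
    ;;                      ("and" . conj) ("worked" . verb) ("harder" . adverb)))
    ;; => ("worked" "harder")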

Generalized Information Extraction

Now that you have understood how to collect fragments from an input text, let us apply this idea to extract important information from a homeopathy case record. Again, for the sake of illustration, I have deliberately kept the example simple.

Here is the text we want to analyze:

[Figure: Homeopathy Case Text]

We would like to extract the following information from this text:

  1. Patient’s age and gender
  2. Modalities: when does the complaint become better or worse?

Here is the ATN to extract gender:

[Figure: ATN for Gender]
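
In spirit, the gender check comes down to keyword cues. The sketch below is only an assumed stand-in for the ATN shown in the figure:

    (defun extract-gender (words)
      "Guess the patient's gender from simple keyword cues in WORDS."
      (cond ((intersection words '("man" "male" "he" "him" "gentleman")
                           :test #'string-equal)
             'male)
            ((intersection words '("woman" "female" "she" "her" "lady")
                           :test #'string-equal)
             'female)))

    ;; (extract-gender '("a" "50" "year" "old" "man")) => MALE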

The following 3 ATNs handle the age:

[Figure: ATNs for Parsing Age]

Extracting the age is slightly more involved because we need to handle sentence patterns like these (a rough sketch of matching them follows the examples):

…. 50 years old ….

…. 50 years of age ….

…. aged 50 years ….
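
Purely as an illustration (these are not the article’s ATNs, and the tokenization into word strings is an assumption), the three patterns above could be matched like this:

    (defun extract-age (words)
      "Return the age as an integer if one of the three patterns is found in
    WORDS, a list of word strings; otherwise return NIL."
      (loop for tail on words
            for (w1 w2 w3 w4) = tail
            ;; "... 50 years old ..." and "... 50 years of age ..."
            when (and w1 w2
                      (plusp (length w1))
                      (every #'digit-char-p w1)
                      (string-equal w2 "years")
                      (or (and w3 (string-equal w3 "old"))
                          (and w3 w4
                               (string-equal w3 "of")
                               (string-equal w4 "age"))))
              return (parse-integer w1)
            ;; "... aged 50 years ..."
            when (and w1 w2
                      (string-equal w1 "aged")
                      (plusp (length w2))
                      (every #'digit-char-p w2))
              return (parse-integer w2)))

    ;; (extract-age '("she" "is" "50" "years" "old"))        => 50
    ;; (extract-age '("a" "man" "aged" "50" "years" "came")) => 50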

Finally, the modality is handled by the following ATN:

[Figure: ATN for Modality]
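
Here, too, is a hedged sketch of keyword-driven modality extraction: it captures up to three words of context after "better" or "worse". The actual ATN may well collect a more precise phrase, possibly using registers:

    (defun extract-modalities (words)
      "Return a list of entries, each starting with a modality keyword
    (\"better\" or \"worse\") followed by up to three context words."
      (loop for tail on words
            for word = (first tail)
            when (member word '("better" "worse") :test #'string-equal)
              collect (cons word (subseq (rest tail)
                                         0 (min 3 (length (rest tail)))))))

    ;; (extract-modalities '("the" "pain" "is" "worse" "at" "night" "and"
    ;;                       "better" "after" "rest"))
    ;; => (("worse" "at" "night" "and") ("better" "after" "rest"))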

Let us parse the given text using the above ATNs.

[Figure: Case Output]

The output is as expected. I hope the above examples show the power and expressiveness of ATNs in parsing text. I have implemented the ATN parser in LispWorks Common Lisp.

Have a great weekend! 

Further Reading

1) Woods, W. A., “Transition Network Grammars for Natural Language Analysis”, Communications of the ACM, October 1970.

2) Bates, Madeleine, “The Theory and Practice of Augmented Transition Network Grammars”, in “Natural Language Communication with Computers”, Lecture Notes in Computer Science, edited by Leonard Bolc, Springer-Verlag, 1978.

3) Finin, T. W., “The Planes Interpreter and Compiler for Augmented Transition Network Grammars”, in “The Design of Interpreters, Compilers, and Editors for Augmented Transition Networks”, edited by Leonard Bolc, Springer-Verlag, 1983.

4) Winograd, Terry, “Language as a Cognitive Process, Volume 1: Syntax”, Addison-Wesley, 1983.

5) Pereira, F. C. N. and Warren, D. H. D., “Definite Clause Grammars for Language Analysis”, Artificial Intelligence 13 (1980).
