INTRODUCTION




	Language processing is translating from one language to another, or
	performing an internal computer action in response to an external
	input.  Language is the essence of interaction between the computer
	and its users.

		    Language also serves as communication among parts and functions
		    in the computer.  In addition, all that comes from the computer
		    is encoded in a language of one form or another.

	Funny thing about the computer:  to get it to do anything, you must
	tell it what you want.  We increase the computer's usefulness by
	rendering its languages more human-oriented.  The ~struggle ~to
	~communicate with the computer is the source of almost all
	frustration and bugs.  That struggle often forces us to say not quite
	what we mean.


	  pic


	Let's look upon this book as a sortie into the essence of
	computing:  language.  Let's understand languages well, and see the
	many places where computer languages may turn in the future.  Let's
	also see how you yourself may feel at ease evolving a language of
	your own.  Let's explore languages and get to know them as well as
	any other practical science.  In this book, we will consider
	enrichments of this pivotal process that yield profound practical
	results.


Programming Languages

	The best known category of computer languages is ~programming
	~languages.  Common to all is the ability to command and
	regulate the computer's actions.  Certainly, our most complex
	interaction with the computer is via ~programming.

	In the following pages, we will apply old, new, and as yet
	untapped language processing techniques, developed for natural
	(human) language, to programming languages.  Reducing the struggle
	to communicate is above all a great ~bug ~killer.


Machine Language - The Computer's Native Tongue

	One programming language has come with every machine since the modern
	computer was born.  It is called ~machine ~language.  All other
	languages are alien to the computer, and require software to exist.
	All the computer ever does is execute machine language.
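
	To make this concrete, here is a minimal sketch in Python of a
	machine "proceeding by machine language".  The one-register
	(accumulator) toy machine, its opcodes, and the sample program are
	invented for illustration; they correspond to no real processor.

		# A toy machine: memory is a list of numbers; the machine
		# fetches, decodes, and executes one numeric instruction
		# at a time.
		def run(memory):
		    acc = 0                      # the single accumulator
		    pc = 0                       # program counter
		    while True:
		        opcode, operand = memory[pc], memory[pc + 1]
		        pc += 2
		        if opcode == 1:          # LOAD  address -> acc
		            acc = memory[operand]
		        elif opcode == 2:        # ADD   address -> acc
		            acc += memory[operand]
		        elif opcode == 3:        # STORE acc -> address
		            memory[operand] = acc
		        elif opcode == 0:        # HALT
		            return memory

		# Add memory[8] and memory[9]; store the sum in memory[10].
		program = [1, 8,  2, 9,  3, 10,  0, 0,  20, 22, 0]
		print(run(program)[10])          # prints 42

	Note how even this trivial addition demands opcodes, addresses, and
	instruction layout, none of which we wanted to say.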


Why Have More Than One Language?

	The biggest reason for inventing a new computer language is that
	machine language is so devastatingly detailed and inflexible.  To
	get anything done, one must specify ~correctly enormously many
	details.  This most often far exceeds what one wants to impart.

	Any computer language may require more detail than you like.  The
	trick to choosing or inventing a language is to minimize the amount
	of detail required ~above ~and ~beyond ~your ~computer ~application.
	A perfect fit would require nothing beyond the normal language you
	use in describing your domain.  (In the discipline of plane
	geometry, for example, we would like to talk about ~lines, ~angles,
	~polygons, etc., but ~not such foreign concepts as ~machine
	~addresses).
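
	For instance, a language well fitted to plane geometry would let us
	write only in geometric terms.  A hypothetical sketch in Python (the
	names and types here are ours, purely illustrative):

		from dataclasses import dataclass
		import math

		@dataclass
		class Point:
		    x: float
		    y: float

		@dataclass
		class Line:
		    a: Point
		    b: Point

		def angle(line):
		    # The direction of a line in degrees: a geometric notion.
		    return math.degrees(math.atan2(line.b.y - line.a.y,
		                                   line.b.x - line.a.x))

		# We speak of lines and angles; no machine addresses in sight.
		diagonal = Line(Point(0, 0), Point(1, 1))
		print(angle(diagonal))           # 45.0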

	Different languages hide different details.  Sometimes, different
	things are expressed most clearly in different languages.  Such is the
	case in the specialized languages of science (e.g., physics,
	chemistry, etc.).  The multilingual universe of disciplines doesn't
	  rule out a central language that gives access to them all.


The Greatest Mismatch Between Humans And Computers

	  One way of looking at this mismatch is that humans are ~ambiguous
	  whereas computers are not.

	Humans respond to any stimulus with ~multiple ~interpretations.  For
	example, you say one thing and the listener associates many
	interpretations.  The listener keeps all of those imagined, not
	necessarily real, associations in mind, and so upon hearing the next
	utterance, he or she can begin to disambiguate sensibly, choosing
	among the associations.

	If the listener were forced to disambiguate among them ~prematurely,
	a misunderstanding would almost always arise, since no two humans
	think precisely alike.  At all times, ambiguity (multiple
	interpretations) proves very useful for anticipating the future.

	  The need to retain multiple interpretations is illustrated:
	  After hearing the sentence:

		    "The car and apple are red.  It ..."

	  the word "it" has two possible meanings.  If the second sentence is:

		    "It drives well."

	  we retain the association between "it" and "car".  In contrast, if the
	  second sentence were:

		    "It tastes good."

	  the "it" and "apple" association is retained instead.

	If you were forced to choose one association prematurely, before
	seeing the second sentence, an irreversible misunderstanding could
	arise.  For example, if you retain only the "it"/"apple" association,
	the second sentence is meaningless, as "The apple drives well."
	doesn't make much sense.

	As any food caterer knows, even ~red tastes good.  So there are
	many implied ambiguities beyond the one considered in our example.

		We're just starting with one of the information world's
		greatest problems, ambiguity, and, as a consequence, the
		challenging complexity of disambiguation.  Let us never
		underestimate either the complexity of ambiguity or the
		potential information breakthroughs made possible by
		disambiguation tailored to each specific task.

	Ambiguity, multiple interpretations, is an essential
	technique that facilitates rich but brief communication.  Having the
	computer support such communication brings the computer much closer
	to the human.
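
	The discipline just described can be sketched in a few lines of
	Python:  keep every association alive, and prune only when later
	evidence arrives.  The two tiny predicate tables are invented for
	the car/apple example above.

		# Candidate referents for "it" after
		# "The car and apple are red."
		candidates = {"car", "apple"}

		# Invented world knowledge: which referents fit which
		# predicate.
		fits = {
		    "drives well": {"car"},
		    "tastes good": {"apple", "red"},   # even red tastes good
		}

		def hear(sentence):
		    # Prune the surviving interpretations against new
		    # evidence.
		    global candidates
		    candidates &= fits[sentence]
		    if not candidates:
		        print("misunderstanding: nothing survives")
		    return candidates

		print(hear("drives well"))   # {'car'} survives

	Choosing prematurely amounts to shrinking candidates to one element
	before hear() has any evidence, which is exactly how the
	"apple drives well" misunderstanding arises.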


Summary Of Contents

	Part 1 introduces techniques for defining new languages.  Part 2
	extends this to introduce compilers, which translate languages all
	the way into the computer's native tongue, machine language.

	Part 3 introduces memory, and a surprising efficiency in representation
	that mimics a structure of the brain (figure ??).  Such concise
	representation is then employed in Part 4, where we tolerate enormous
	amounts of ~ambiguity in rendering understandings.

	  The tolerance of ambiguity greatly simplifies the language processing
	  task, and supports languages without restriction.

	  Part 5 introduces the notion of ~functions, shared programs.  Read
	  this first if you're not familiar with the computer science term.
	The example compiler from Part 2 is augmented now to include functions.

	Part 6 shows all of the programming language ICL, the flexible
	language used throughout this text for semantic specification.

	Part 7 presents advanced topics in language processing.  There,
	we see the uses and advantages of ~ambiguous ~semantics.

	Part 8 covers ~memory ~management, a necessity that facilitates
	everything else in this book, and thus completes the full view of
	the science of practical language processing.


	  pic


Graduated Language Examples

	Naturally, Part 1 introduces example languages.  More complex examples
	appear in Part 2, and even more so in Part 7.  The biggest example
	is the language ICL, shown in Part 6.  ICL is sufficiently big that we
	don't show its implementation in detail.	Instead, we present it in
	  terms of summaries and a reference guide.  It is still described,
	  though, in terms of rules of grammar.


For Computer Novices

	To get started, newcomers might like to explore some basic questions,
	like:

		    How do ~hardware and ~software meet?	See Chapters 5 and 6.

		    What is a great software technique?  Functions. See Chapter 18.

		    How do we represent human information in computer memory?  See
		    Chapters 9 and 10.

		    What are the greatest disasters, the hardest bugs to diagnose
		    in software?	See Chapter 31.


Part I  -  Language Definition

	  We start out by providing for the definition of new and richly
	  varied languages customizable for any application.

	  Languages are often dissected into what are known as the ~syntax and
	  the ~semantics of the language.  The ~syntax of a language is the
	  actual ~notations used to express ideas in the language, including
	  punctuation and word order.	 The ~semantics covers everything else,
	  the ~meaning behind those notations.

	A long-standing and powerful notation for specifying the syntax or
	~grammar of languages is known as BNF (Backus-Naur Form).  Chapter 1
	introduces BNF, and also a variation of BNF that allows for the
	inclusion of ~semantic specification.
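
	To give a flavor of what's coming, here is a tiny grammar in generic
	BNF (the book's own variation appears in Chapter 1), together with
	one way a Python program might hold the same rules as data:

		# In BNF, a grammar for sums of numbers might read:
		#
		#   <sum>    ::=  <number>  |  <sum> "+" <number>
		#   <number> ::=  <digit>   |  <number> <digit>
		#
		# The same grammar as Python data: each part of speech maps
		# to its alternative right-hand sides.
		grammar = {
		    "<sum>":    [["<number>"], ["<sum>", "+", "<number>"]],
		    "<number>": [["<digit>"], ["<number>", "<digit>"]],
		    "<digit>":  [[d] for d in "0123456789"],
		}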

	While BNF is elegantly simple for syntax, there has been thus far no
	equally simple and clear notation for expressing the ~semantics of
	languages.  The trouble with semantics is its required generality.
	Since we say that semantics is ~everything ~beyond syntax, we pass
	the buck, possibly the hard stuff, into the domain of semantics.

	  Semantic specification requires all the power of a general programming
	  language.	 We use a programming language called ICL that was designed
	  especially to ease and clarify semantic specification.  As ICL is used,
	  we introduce its notations and meanings.

	  Chapter 2 augments our notion of semantics to include ~actions as well
	  as ~values.  Often, meanings are specified most clearly as actions.

	Chapter 3 presents a special case of syntax known as ~precedence.
	Precedence simplifies the specification of syntax, and it renders
	parsing (syntax processing) more efficient.
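
	For example, the text 2+3*4 admits two syntactic readings, (2+3)*4
	and 2+(3*4); giving "*" higher precedence than "+" selects the
	second.  A minimal sketch of that selection in Python (the
	precedence table is ours, for illustration):

		import re

		PRECEDENCE = {"+": 1, "*": 2}    # higher binds tighter

		def evaluate(tokens, min_prec=1):
		    # Precedence climbing: parse and evaluate in one pass.
		    left = int(tokens.pop(0))            # a number
		    while tokens and PRECEDENCE[tokens[0]] >= min_prec:
		        op = tokens.pop(0)
		        right = evaluate(tokens, PRECEDENCE[op] + 1)
		        left = left + right if op == "+" else left * right
		    return left

		print(evaluate(re.findall(r"\d+|[+*]", "2+3*4")))   # 14, not 20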

	The very short chapter 4 is a signpost.  We introduce an easement in
	the specification of semantics that we use throughout the remainder
	of the book.


Part II - Compilation

	  The computer's native tongue is called ~machine ~language.
	~Compilation usually refers to translations that end in that native
	tongue.

	Compilation often affords a hundredfold increase in efficiency over
	other methods for handling high-level languages, languages besides
	machine language.  Chapter 5 introduces compilation.

	(For even more speed in special cases, ~silicon ~compilation can be
	used to translate all the way into complete ~microchip designs).

	Chapter 6 introduces machine language.  When you understand machine
	language, you understand how computers work.  Also introduced is
	~assembly ~language, an important step above machine language.

	Chapter 7 introduces ~embedded ~assembly ~language.  This is a
	flexible and extensible form of assembly language.  Embedded within
	the programming language ICL, the language supports the buildup
	of ~abbreviations.  They raise the level of assembly language
	arbitrarily high.  Services known as ~macros and ~conditional
	~assembly are supported here.
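
	The flavor of "embedded" assembly can be sketched in Python (a
	hypothetical mini-assembler of ours; the real facility is embedded
	in ICL):  instructions are emitted by ordinary function calls, so
	abbreviations and macros are simply functions.

		code = []                      # the growing instruction list

		def emit(op, *args):           # the assembler primitive
		    code.append((op,) + args)

		# An "abbreviation": a macro is just a function that emits.
		def move(dst, src):
		    emit("LOAD", src)
		    emit("STORE", dst)

		# "Conditional assembly": ordinary host-language control flow.
		DEBUG = True
		def checked_move(dst, src):
		    if DEBUG:
		        emit("TRACE", src)
		    move(dst, src)

		checked_move("x", "y")
		print(code)   # [('TRACE', 'y'), ('LOAD', 'y'), ('STORE', 'x')]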

	Chapter 8 introduces a complete compiler for a high-level programming
	language, translating all the way into machine language.  The
	~embedded ~assembler in ICL provides the semantic support.


Part III - Memory

	This part introduces an associative memory, like the human's most
	  natural form of memory.

	  The brain has a richness in its interconnectivity.	The ends of two
	  or more neurons may meet, and ~share space.  Such shared organizations
	  represent information surprisingly efficiently.  An exponential number
	  of concepts can be represented simultaneously using only a linear
	  amount of space or memory.

	  This sharedness reflects an important part of human information,
	  namely, that all concepts share some information in common with other
	  concepts.	 Our representation and processing in Part IV takes great
	  advantage of that.
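
	The exponential-in-linear claim can be sketched in Python (the
	structure below is ours, purely illustrative):  a chain of n
	positions, each offering two shared alternatives, occupies space
	linear in n yet simultaneously represents 2^n distinct concepts.

		# Each position holds two shared alternatives; every path
		# through the chain is one "concept".
		n = 20
		chain = [("red", "green")] * n   # linear space: 2*n words

		def count_concepts(chain):
		    total = 1
		    for alternatives in chain:
		        total *= len(alternatives)
		    return total

		print(count_concepts(chain))   # 1048576 concepts, 40 words stored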

	  Sharedness in computer memory is introduced in chapter 9.	 How we
	  might like such memory to ~evolve is covered in chapter 10.  We ask
	  what ~modification can mean in associative memory.


	  pic



Part IV - Language Processing I

	  A major language processing task is to make sense of a given input text
	  in terms of its syntax.  Great difficulty can be encountered in this
	  endeavor.

	Ambiguity, a multiplicity of interpretations, arises when examining
	portions of the given input text.  Even in ~programming ~languages,
	ambiguities arise, ~at ~least ~during ~the ~language
	~translation ~task.

	  Chapter 11 introduces ambiguity.	Chapter 12 shows a
	  general parser that, like any parser, renders an understanding of a
	  given piece of text in terms of its syntax.  Our simple and effective
	  parser tolerates ambiguity wholeheartedly, and that greatly simplifies
	  the language processing task.

	  Chapter 13 provides a proof of correctness for the parser.
	  The proof is possible because this parser uses only ~subjective
	  ~modification, one of two forms of ~modification introduced in Part 3.

	  We modify the parser in chapter 14 to assure that it always terminates,
	  at least for the most popular form of grammar, ~context-free grammars.

	Surprisingly, achieving termination requires the emergence of
	~ambiguous ~semantics.  This is explored in chapter 15.  We discover
	that exponentially many (or even infinitely many) ~meanings can be
	represented in only a polynomial amount of memory.
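
	A hint of how that is possible, as a Python sketch of our own (not
	the book's machinery):  a shared node records its alternatives once,
	so counting meanings never requires enumerating them.

		# A shared forest: a node is either a word (a leaf) or a list
		# of alternatives, each alternative a (left, right) pair of
		# sub-nodes.  Sub-nodes are shared, so storage stays small
		# even when the number of distinct readings is exponential.
		def meanings(node, memo=None):
		    memo = {} if memo is None else memo
		    if isinstance(node, str):              # a leaf
		        return 1
		    if id(node) in memo:
		        return memo[id(node)]
		    total = sum(meanings(l, memo) * meanings(r, memo)
		                for l, r in node)
		    memo[id(node)] = total
		    return total

		ab = [("a", "b"), ("b", "a")]   # two readings, stored once
		top = [(ab, ab)]                # shared on both sides
		print(meanings(top))            # 4 meanings, 3 nodes of storage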

	  Efficiency enhancements for the parser appear in chapter 16.  Chapter
	  17 covers miscellaneous topics, including a general way to report
	  syntax errors for any language implemented by the parser.


Part V - Shared Programs - Functions

	  Some of the greatest leverage in computer programming arises by
	  sharing programs.  We explored ~shared ~data in Part 3.  As with
	  data, one piece of program can be employed from many independent points
	  of view.

	  Such shared pieces of programs are called ~functions, ~procedures,
	  or ~subroutines.  If you're not familiar with at least one of these
	computer science terms, read the first section in chapter 18 before
	anything else.  ~Parameters and ~local ~variables are introduced there.

	Chapter 18 introduces functions, and shows how to implement them.
	Chapter 19 augments the compiler from chapter 8 so as to include
	functions.
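
	For readers meeting these terms for the first time, here is one
	shared program employed from two independent points of view, in
	Python, with a ~parameter and a ~local ~variable:

		def average(numbers):        # "numbers" is a parameter
		    total = sum(numbers)     # "total" is a local variable
		    return total / len(numbers)

		# One piece of program, two independent points of view:
		print(average([2, 4, 6]))    # 4.0
		print(average([10, 20]))     # 15.0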


Part VI - ICL

	Throughout this text, languages are implemented completely, with
	syntax expressed in our variation of BNF, and with semantics specified
	in ICL.  One of ICL's original design goals was to render semantic
	  specification as clear, concise, and easy as BNF is for syntax.

	ICL's original design goals also included the need to ease the
	specification of microchip designs.  Many constructs, some suggested
	by Ivan Sutherland, arose to meet this goal.  Except for one
	notation, all those microchip features have been used profusely in
	other applications, and so are presented here.  ICL was born at
	Caltech in 1977.

	The main advantage of ICL is the reduction of the number and
	severity of bugs.  Programs in ICL come with important guarantees.

	ICL is a ~strongly ~typed language, which keeps the
	computer from confusing data of different species (types).  While the
	computer needs to obey these distinctions to work, ICL offers ways
	for the programmer to gloss over those distinctions completely.

	That easement is due in part to unrestricted ~polymorphism and
	~coercion.  They serve to ~heal ~the ~wounds of type-checking.
	(~Type ~checking is often perceived as demanding too much detail).

	Chapter 23 shows an example of a major program modification implemented
	easily and safely by a pair of coercions.  The computer is thus made
	more manageable to the human.
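
	The flavor of such a coercion can be sketched in Python (a
	hypothetical example of ours; ICL's mechanism is its own):  one
	conversion admits the old representation wherever the new one is
	expected, so existing call sites go untouched.

		# Old representation: a bare number, degrees Celsius.
		# New representation: a record carrying the unit as well.
		def coerce_temperature(value):
		    # The coercion: heal the wound of the type change.
		    if isinstance(value, (int, float)):
		        return {"unit": "C", "degrees": float(value)}
		    return value                 # already the new type

		def describe(temperature):
		    temperature = coerce_temperature(temperature)
		    return "%s degrees %s" % (temperature["degrees"],
		                              temperature["unit"])

		print(describe(21.5))            # old callers keep working
		print(describe({"unit": "F", "degrees": 70.0}))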

	ICL's model of memory is based on that presented in Part 3.  Annoying
	side-effects, stumbled upon accidentally via the use of ~pointers, are
	avoided.  Pointers are used profusely in the implementation, for great
	efficiency, but the programmer needn't be aware of them.  ICL never
	mishandles a pointer, and thus avoids the world's most common and
	frustrating error: "access violation" or "bus error", which mean the
	program tried to access an invalid address.

	Except for a special notation that stands out clearly, and which is
	required relatively rarely, all notations are free of unseen side-
	effects.  This property translates in practice to fast debugging, as
	~proofs ~of ~program ~correctness become possible and practical in this
	straightforward model.  An ~airtight quality comes with ICL programs.

	  Chapter 20 considers the creation of new programming languages like
	  ICL, including the basic parts of programming languages.

	  A relatively brief overview of ICL is offered in chapter 21.  Chapter
	  22 details all of ICL excluding ~types of data you can declare.

	  In contrast, Chapter 23 documents types and type-specific expressions.


Part VII - Advanced Topics In Language Processing

	  On many occasions, it is advantageous to require that a
	  specification make sense simultaneously in two or more languages.
	  For example, one might like a new building to make sense both
	  ~structurally and ~financially.

	  Chapter 24 introduces this notion, and lays the groundwork for the
	  main example used throughout this Part.

	  Chapter 25 introduces new notations into ICL that render ~phrase
	  ~generation clear and easy.	 This capability makes possible the
	  simultaneous adherence of an input to the requirements of multiple
	  languages.

	  Chapter 26 presents an example language that involves datatypes, like
	  ICL does.	 The two domains involved are ~syntax and ~datatypes.

	Chapter 27 shows how to efficiently process languages whose semantics
	~generate ~phrases in other languages.  This renders practical
	~chains ~of ~languages, where one translates into the next.

	  Chapter 28 shows what to do at the ~last language in such a chain.
	  There, finally, ambiguity may need to be resolved via ~common
	  ~sense.

	Both chapters 26 and 27 work in the context of ambiguous semantics,
	as is necessary for the great savings in compute time.

	Chapter 29 shows how to specify semantics directly in the new language
	being designed.  This contrasts with always specifying semantics in ICL.

	  Chapter 30 considers reporting semantic errors.  In particular, for
	  our example, this refers to reporting ~datatype ~errors.


Part VIII - Memory Management

	  This part introduces no new language processing techniques, but
	  introduces ~automatic ~memory ~management, or ~garbage ~collection.
	  Such is required implicitly to implement all our work up to now.
	  Automatic memory management is required in non-trivial computer
	  applications in general.  Perhaps the single greatest contribution of
	  the programming language LISP was its use of garbage collection.

	Chapter 31 introduces the need for automatic memory management, as it
	kills by far the most severe class of bugs.  Chapter 32 considers
	garbage collection for ~fixed-sized ~blocks of memory.  An efficient
	way to handle large databases, using the disk, is provided in chapter
	33.  Chapter 34 shows how to garbage collect ~variable-sized ~blocks.
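
	A minimal mark-and-sweep sketch for fixed-sized blocks, in Python
	with an invented heap layout (the chapters above give the real
	treatment):

		HEAP_SIZE = 8
		heap = [{"refs": [], "marked": False}
		        for _ in range(HEAP_SIZE)]
		heap[0]["refs"] = [1]      # block 0 points at block 1
		roots = [0]                # blocks the program can reach

		def mark(index):
		    # Mark every block reachable from this one.
		    block = heap[index]
		    if not block["marked"]:
		        block["marked"] = True
		        for ref in block["refs"]:
		            mark(ref)

		def sweep():
		    # Unmarked blocks are garbage; reset marks for next time.
		    free = [i for i, b in enumerate(heap) if not b["marked"]]
		    for b in heap:
		        b["marked"] = False
		    return free

		for root in roots:
		    mark(root)
		print(sweep())             # [2, 3, 4, 5, 6, 7] are reusable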

	  Chapter 35 shows a way to ~organize a large database implemented as
	  disk files.  New and old versions of a database may be efficiently
	  retained simultaneously.

	Finally, ~incremental ~garbage ~collection is presented to spread
	out the cost of garbage collection.  If one doesn't use techniques like
	those in chapter 33, one relies on ~virtual ~memory to implement large
	databases.  The cost of garbage collection in this context is
	relatively high, but worthwhile.


	THESE ARE TO BE PLACED AT THE BEGINNING OF EACH PART



Part I  -  Language Definition

	We start out by providing for the definition of new and richly
	varied languages customizable for any application.

	Languages are often dissected into what are known as the ~syntax and
	the ~semantics of the language.  The ~syntax of a language is the
	actual ~notations used to express ideas in the language, including
	punctuation and word order.  The ~semantics covers everything else,
	the ~meaning behind those notations.

	A long-standing and powerful notation for specifying the syntax or
	~grammar of languages is known as BNF (Backus-Naur Form).  Chapter 1
	introduces BNF, and also a variation of BNF that allows for the
	inclusion of ~semantic specification.

	While BNF is elegantly simple for syntax, there has been thus far no
	equally simple and clear notation for expressing the ~semantics of
	languages.  The trouble with semantics is its required generality.
	Since we say that semantics is ~everything ~beyond syntax, we pass
	the buck, possibly the hard stuff, into the domain of semantics.

	Semantic specification requires all the power of a general programming
	language.  We use a programming language called ICL that was designed
	especially to ease and clarify semantic specification.  As ICL is used,
	we introduce its notations and meanings.

	Chapter 2 augments our notion of semantics to include ~actions as well
	as ~values.  Often, meanings are specified most clearly as actions.

	Chapter 3 presents a special case of syntax known as ~precedence.
	Precedence simplifies the specification of syntax, and it renders
	parsing (syntax processing) more efficient.

	The very short chapter 4 is a signpost.  We introduce an easement in
	the specification of semantics that we use throughout the remainder
	of the book.


Part II - Compilation

	The computer's native tongue is called ~machine ~language.	Translation
	  that ends in that native tongue is called ~compilation.

	  We apply the ideas in Part 1 to this wonderful class of translations.

	  Compilation often affords a hundredfold increase in efficiency over
	  other methods for handling high-level languages, languages besides
	  machine language.  Chapter 5 introduces compilation.

	  (For even more speed in special cases, ~silicon ~compilation can be
	  used to translate all the way into complete ~microchip designs).

	  Chapter 6 introduces machine language.	When you understand machine
	  language, you understand how computers work.	Also introduced is
	  ~assembly ~language, an important step above machine language.

	  Chapter 7 introduces ~embedded ~assembly ~language.	 This is a
	  flexible and extensible form of assembly language.	Embedded within
	  the programming language ICL, the language supports the buildup
	  of ~abbreviations.  They raise the level of assembly language
	  arbitrarily high.  Services known as ~macros and ~conditional
	  ~assembly are supported here.

	Chapter 8 introduces a complete compiler for a high-level programming
	language, translating all the way into machine language.  The
	~embedded ~assembler in ICL provides the semantic support.


Part III - Memory

	  Let's take a break for the moment from language processing.

	This part introduces an associative memory, like the human's most
	  natural form of memory.

	  The brain has a richness in its interconnectivity.	The ends of two
	  or more neurons may meet, and ~share space.  Such shared organizations
	  represent information surprisingly efficiently.  An exponential number
	  of concepts can be represented simultaneously using only a linear
	  amount of space or memory.

	  This sharedness reflects an important part of human information,
	  namely, that all concepts share some information in common with other
	  concepts.	 Our representation and processing in Part 4 takes great
	  advantage of that.

	  Sharedness in computer memory is introduced in chapter 9.	 How we
	  might like such memory to ~evolve is covered in chapter 10.  We ask
	  what ~modification can mean in associative memory.

	Two distinct forms of modification are developed.  The ~subjective
	form is very safe, and fortunately, it makes up the vast majority
	of modifications desired in computer programs.  It is always used,
	for example, when dealing with individual numbers.

	The other form, ~objective ~modification, is also desired sometimes,
	but can have potentially devastating side effects.

	  By making safe, subjective modification the default, as ICL does,
	  what you see is what you get when looking at a program listing.	 All
	  side-effects are visible in the listing.

	Objective modification always requires a ~special notation.  Thus, the
	relatively few potentially dangerous modifications stand out clearly
	in the program listing.
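
	A rough Python analogy of ours (not the book's definition):
	subjective modification resembles rebinding a name to new data,
	visible only to the one point of view; objective modification
	resembles mutating a shared object in place, visible to every
	holder.

		shared = [1, 2, 3]
		mine, yours = shared, shared

		# "Subjective" flavor: rebind my name; you are unaffected.
		mine = mine + [4]
		print(yours)    # [1, 2, 3]

		# "Objective" flavor: mutate the shared object in place;
		# the effect leaks to you.
		mine = shared
		mine.append(4)
		print(yours)    # [1, 2, 3, 4] -- a side-effect invisible
		                # from your own point of view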

	Fast debugging and proofs of program correctness now become possible.
	The intricate details of computer memory and interactions are factored
	out of the picture.


Part IV - Language Processing I

	So far, in Parts 1 and 2, we've seen how to define some languages.
	Let's now explore how to implement languages so defined.

	  A major language processing task is to make sense of a given input text
	  in terms of its syntax.  Great difficulty can be encountered in this
	  endeavor.

	Ambiguity, a multiplicity of interpretations, arises when examining
	portions of the given input text.  Even in ~programming ~languages,
	ambiguities arise, ~at ~least ~during ~the ~language
	~translation ~task.

	  Chapter 11 introduces ambiguity.	Chapter 12 shows a
	  general parser that, like any parser, renders an understanding of a
	  given piece of text in terms of its syntax.  Our simple and effective
	  parser tolerates ambiguity wholeheartedly, and that greatly simplifies
	  the language processing task.

	  Chapter 13 provides a proof of correctness for the parser.  The
	  parser's consistent use of subjective modification makes this possible.

	We modify the parser in chapter 14 to assure that it always terminates,
	at least for the most popular form of grammar, ~context-free grammars.
	A polynomial upper bound, as a function of the length of the given
	input text, is developed.  We discover that in practice,
	the non-linear (polynomial) behavior is tolerable, and enhances
	clarity of language.

	Surprisingly, achieving termination requires the emergence of
	~ambiguous ~semantics.  This is explored in chapter 15.  We discover
	that exponentially many (or even infinitely many) ~meanings can be
	represented in only a polynomial amount of memory.

	Efficiency enhancements for the parser appear in chapter 16.  Chapter
	17 covers miscellaneous topics, including a general way to report
	syntax errors for any language implemented by the parser.


Part V - Shared Programs - Functions

	Some of the greatest leverage in computer programming arises by
	sharing programs.  We explored ~shared ~data in Part 3.  As with
	data, one piece of program can be employed from many independent points
	of view.

	Such shared pieces of programs are called ~functions, ~procedures,
	or ~subroutines.  If you're not familiar with at least one of these
	  computer science terms, read the first section in chapter 18 before
	  anything else.	~Parameters and ~local ~variables are introduced there.

	  Chapter 18 introduces functions, and shows how to implement them.
	  Chapter 19 augments the compiler from chapter 8 so as to include
	  functions.


Part VI - ICL

	  Throughout this text, languages are implemented completely, with
	  syntax expressed in our variation of BNF, and with semantics specified
	  in ICL.  One of ICL's original design goals was to render semantic
	specification as clear, concise, and easy as BNF is for syntax.

	ICL's original design goals also included the need to ease the
	specification of microchip designs.  Many constructs, some suggested
	by Ivan Sutherland, arose to meet this goal.  Except for one
	notation, all those microchip features have been used profusely in
	other applications, and so are presented here.  ICL was born at
	Caltech in 1977.

	  The main advantage of ICL is the reduction of the number and
	  severity of bugs.  Programs in ICL come with important guarantees.

	ICL is a ~strongly ~typed language, which keeps the
	computer from confusing data of different species (types).  While the
	computer needs to obey these distinctions to work, ICL offers ways
	for the programmer to gloss over those distinctions completely.

	  That easement is due in part to unrestricted ~polymorphism and
	  ~coercion.  They serve to ~heal ~the ~wounds of type-checking.
	  (~Type ~checking is often perceived as demanding too much detail).

	  Chapter 23 shows an example of a major program modification implemented
	  easily and safely by a pair of coercions.  The computer is thus made
	  more manageable to the human.

	ICL's model of memory is based on that presented in Part 3.  Annoying
	side-effects, stumbled upon accidentally via the use of ~pointers, are
	avoided.  Pointers are used profusely in the implementation, for great
	efficiency, but the programmer needn't be aware of them.  ICL never
	mishandles a pointer, and thus avoids the world's most common and
	frustrating error: "access violation" or "bus error", which mean the
	program tried to access an invalid address.

	Except for a special notation that stands out clearly, and which is
	required relatively rarely, all notations are free of unseen side-
	effects.  This property translates in practice to fast debugging, as
	~proofs ~of ~program ~correctness become possible and practical in this
	straightforward model.  An ~airtight quality comes with ICL programs.

	Chapter 20 considers the creation of new programming languages like
	ICL, including the basic parts of programming languages.

	A relatively brief overview of ICL is offered in chapter 21.  Chapter
	22 details all of ICL excluding ~types of data you can declare.

	In contrast, Chapter 23 documents types and type-specific expressions.
	The types are presented in an extensible manner, so that entirely
	new classes of ~types could be introduced in a modular fashion.


Part VII - Advanced Topics In Language Processing

	So far, we've seen how to define moderately complex languages in Parts
	  1 and 2.	We now extend this endeavor to much richer languages,
	  languages that are themselves defined most clearly in terms of ~more
	  ~than ~one domain or language.

	  On many occasions, it is advantageous to require that a
	  specification make sense simultaneously in two or more languages.
	  For example, one might like a new building to make sense both
	  ~structurally and ~financially.

	  Chapter 24 introduces this notion, and lays the groundwork for the
	  main example used throughout this Part.

	  Chapter 25 introduces new notations into ICL that render ~phrase
	  ~generation clear and easy.	 This capability makes possible the
	  simultaneous adherence of an input to the requirements of multiple
	  languages.  All the while, these notations support ambiguity
	  implicitly throughout the translation process.

	  Chapter 26 presents an example language that involves datatypes, like
	  ICL does.	 The two domains involved are ~syntax and ~datatypes.	 An
	  accepted program specification must make sense in both of those domains
	  simultaneously.

	Chapter 27 shows how to efficiently process languages whose semantics
	~generate ~phrases in other languages.  This renders practical
	~chains ~of ~languages, where one translates into the next.  There,
	we re-employ our syntax processor to assist in semantic processing.
	Successful translations through chains of languages are made possible
	by our uniform support of ambiguity, multiple interpretations.  This
	is done in polynomial time even though exponentially many meanings
	may be involved.

	  Chapter 28 shows what to do at the ~last language in such a chain.
	  There, finally, ambiguity may need to be resolved via ~common
	  ~sense.  That processing also involves only polynomial expense, even
	  though it may process exponentially or even infinitely many meanings.

	Both chapters 26 and 27 work in the context of ambiguous semantics,
	as is necessary for this major savings in compute time.

	Chapter 29 shows how to specify semantics directly in the new language
	being designed.  This contrasts with always specifying semantics in ICL.

	  Chapter 30 considers reporting semantic errors.  In particular, for
	  our example, this refers to reporting ~datatype ~errors.


Part VIII - Memory Management

	  This part introduces no new language processing techniques, but
	  introduces ~automatic ~memory ~management, or ~garbage ~collection.
	  Such is required implicitly to implement all our work up to now.
	  Automatic memory management is required in non-trivial computer
	  applications in general.  Perhaps the single greatest contribution of
	  the programming language LISP was its use of garbage collection.

	Chapter 31 introduces the need for automatic memory management, as it
	kills by far the most severe class of bugs.  Chapter 32 considers
	garbage collection for ~fixed-sized ~blocks of memory.  An efficient
	way to handle large databases, using the disk, is provided in chapter
	33.  Chapter 34 shows how to garbage collect ~variable-sized ~blocks.

	  Chapter 35 shows a way to ~organize a large database implemented as
	  disk files.  New and old versions of a database may be efficiently
	  retained simultaneously.

	Finally, ~incremental ~garbage ~collection is presented to spread
	out the cost of garbage collection.  If one doesn't use techniques like
	those in chapter 33, one relies on ~virtual ~memory to implement large
	databases.  The cost of garbage collection in this context is
	relatively high, but worthwhile.