This article describes a number of phenomena that must be taken into account in order to model aspects of information transmission and reception during natural language processing. It presents a brief comparison with two prominent, closely related approaches (Dynamic Syntax and Left-Associative Grammar), followed by a detailed description of the proposed model, referred to as Discourse Information Grammar (DIG). It then illustrates the model, briefly discusses its potential applications to computer-assisted language learning, and concludes with a definition of what is understood by “information” in Discourse Information Grammar.