diff --git "a/en/argumentative_zoning_multilingual_en_test.jsonl" "b/en/argumentative_zoning_multilingual_en_test.jsonl" new file mode 100644--- /dev/null +++ "b/en/argumentative_zoning_multilingual_en_test.jsonl" @@ -0,0 +1,2564 @@ +{"text":"Gorrellrestricts this constraint to Primary structural relations ( i.e. dominance and precedence ) , while secondary relations ( e.g. thematic and case dependencies ) are not so constrained .","label":6,"label_text":"OTH"} +{"text":"This section discusses the nature of the structure and states of Paradigme , and also the nature of the similarity computed on it .","label":0,"label_text":"TXT"} +{"text":"The transfer module is free to attempt structural transfer in order to produce the best possible first guess .","label":5,"label_text":"OWN"} +{"text":"The programs make crucial use of the prediction functionin evaluating candidate tone transcriptions .","label":5,"label_text":"OWN"} +{"text":"Whitelock's Shake-and-Bake generation algorithm attempts to arrange the bag of target signs until a grammatical ordering ( an ordering which allows all of the signs to combine to yield a single sign ) is found .","label":6,"label_text":"OTH"} +{"text":"When the length of the affix or ending is 1 the estimation error is not changed sinceis 0 .","label":5,"label_text":"OWN"} +{"text":"In the present context , say for a sequence of 20 tones , the search space containspossible tone transcriptions , and for each of these there are thousands of possible parameter settings , too large a search space for exhaustive search in a reasonable amount of computation time .","label":5,"label_text":"OWN"} +{"text":"A TNCB records dominance information from derivations and is amenable to incremental updates .","label":5,"label_text":"OWN"} +{"text":"Observe , however , that such a bottom-up synthesis of a new unsaturated type is only required , if that type is to be consumed ( as the antecedent of an implication ) by another type 
.","label":4,"label_text":"BKG"} +{"text":"Normally , we would expect a conversant to notice this contradiction and to drop each of these elementary presuppositions when she interprets.","label":5,"label_text":"OWN"} +{"text":"For every word we computed its metrics exactly as in the previous experiment .","label":5,"label_text":"OWN"} +{"text":"ForHyman's model , this row begins at 0 and is increased by 1 for each downstep encountered .","label":6,"label_text":"OTH"} +{"text":"The grammar must be finitely ambiguous , i.e. , fulfill the off-line parsability constraintShieber 1989.","label":5,"label_text":"OWN"} +{"text":"For each node, compute thicknessof each subrfrantin the following way :","label":5,"label_text":"OWN"} +{"text":"Here is a typical example of tagging a text of 5970 words .","label":5,"label_text":"OWN"} +{"text":"The similarity between words provides a new method for analysing the structure of text .","label":5,"label_text":"OWN"} +{"text":"Ifis in a head-part , letbe doubled .","label":5,"label_text":"OWN"} +{"text":"Furthermore , comparing whole phone letters works better than the more sophisticated technique of comparing features , and restricting comparison to pairs based on the same words does not make the latter any better .","label":5,"label_text":"OWN"} +{"text":"So the isogloss is incomplete and cannot be meaningfully compared with isoglosses based on different sets of sites .","label":1,"label_text":"CTR"} +{"text":"As it stands the formalism is weakly equivalent to a context-free grammar and as such will have problems dealing with phenomena like discontinuous constituents , non-constituent coordination and gapping .","label":5,"label_text":"OWN"} +{"text":"Given that the ANLT and CLARE2.5 grammars have broadly similar ( wide ) coverage and return very similar numbers of syntactic analyses for the same inputs , the significantly better throughput of the three parsers described in this paper over the CLE parser indicates that they do not 
contain any significant implementational deficiencies which would bias the results .","label":5,"label_text":"OWN"} +{"text":"What seems to come out from these results is that there is not a consistent relationship between the size of the tagsets and the tagging accuracy .","label":5,"label_text":"OWN"} +{"text":"This will produce a set of possible candidate justification chains , and three heuristics will then be applied to select from among them .","label":5,"label_text":"OWN"} +{"text":"The actor will try to work towards a shared domain plan , adding intentions to perform the appropriate speech acts to work towards this goal .","label":5,"label_text":"OWN"} +{"text":"The second split then further refines the weaponry sense into a projectile sense ( cluster 3 ) and a gun sense ( cluster 4 ) .","label":5,"label_text":"OWN"} +{"text":"Other words are tagged using suffix information , or else defaults are invoked .","label":5,"label_text":"OWN"} +{"text":"An informal semantics of the operator can be given by positing a set of rules of behavior R .","label":5,"label_text":"OWN"} +{"text":"This leads to the interpretation of a bound variable as a `` scoped constant '' - it acts like a constant that is not visible from the top of the term , but which becomes visible during the descent through the abstraction .","label":5,"label_text":"OWN"} +{"text":"Conveying a step of the derivation .","label":5,"label_text":"OWN"} +{"text":"In sum , the felicity of both gapping and VP-ellipsis appears to be dependent on the type of coherence relation extant between the source and target clauses .","label":5,"label_text":"OWN"} +{"text":"in the two contexts","label":1,"label_text":"CTR"} +{"text":"A TNCB is composed of a sign , and a history of how it was derived from its children .","label":5,"label_text":"OWN"} +{"text":"Assume that there is a parser that constructs partial tree structures , as recognizing each word from the head sequentially .","label":5,"label_text":"OWN"} 
+{"text":"Now I shall show how this general equation relates to the equations for IgboLiberman et al. 1993, reproduced below :","label":3,"label_text":"BAS"} +{"text":"As such , it permits direct comparison between competing hypotheses ; that is , the shorter the representation of some hypothesis , the more distributional information can be extracted and , therefore , the better the hypothesis .","label":5,"label_text":"OWN"} +{"text":"This is actually a schema for a family of rules , collectively called `` generalized coordination '' , since the semantic rule is different for each case .","label":6,"label_text":"OTH"} +{"text":"Figureshows an example of the results , which is part of the case frame pattern ( dendroid distribution ) for the verb ` buy .","label":5,"label_text":"OWN"} +{"text":"Previous spelling programs , unless restricted to a very small set of words , have operated as post-processors .","label":6,"label_text":"OTH"} +{"text":"I used two different methods of comparison ,Pearson'scomputed between all corresponding cells in the two matrices , and","label":5,"label_text":"OWN"} +{"text":"Subsectionpresents a simple two-level grammar which describes the above data .","label":0,"label_text":"TXT"} +{"text":"is non-constituent coordination under the primary reading where the scope of former does not contain living in England i.e. 
where the semantic bracketing is :","label":4,"label_text":"BKG"} +{"text":"The error rules are in two level format and integrate seamlessly into morphological analysis .","label":5,"label_text":"OWN"} +{"text":"We verify , again via balancedness of the primitive counts , thatholds , because these are the numbers of positive and negative occurrences ofin the sequent .","label":5,"label_text":"OWN"} +{"text":"whereandare the sum of weighted activity ( at time T ) of the nodes referred in the rfrant and rfr respectively .","label":5,"label_text":"OWN"} +{"text":"Instead it is a simple matter of using the meta-level- reduction to eliminate- redexes to produce the final result.","label":5,"label_text":"OWN"} +{"text":"For the rules with the affix or ending length of 2 the estimation error is reduced by, for the length 3 this will be, etc .","label":5,"label_text":"OWN"} +{"text":"This allows the grammar to constrain the applicability of context-free rules .","label":6,"label_text":"OTH"} +{"text":"The first is to introduce an explicit product operatorWood 1988, allowing types of the form NP * NP .","label":6,"label_text":"OTH"} +{"text":"A similar problem concerns examples such as the following , fromGibson et al. 
1993.","label":5,"label_text":"OWN"} +{"text":"I call each member of U an object in I .","label":5,"label_text":"OWN"} +{"text":"The lattice shows a partial order that is defined over the different levels of truth .","label":5,"label_text":"OWN"} +{"text":"The goal of text categorization is to tag texts with the name of the language in which they are written .","label":4,"label_text":"BKG"} +{"text":"The splitting procedure can then be repeated to achieve the desired number of clusters or model cross-entropy .","label":5,"label_text":"OWN"} +{"text":"We show that the parsing problem for semidirectional Lambek Grammar is NP-complete by a reduction of the 3-Partition problem .","label":2,"label_text":"AIM"} +{"text":"The figures include garbage collection time , and phrasal ( where appropriate ) processing , but not parse forest unpacking .","label":5,"label_text":"OWN"} +{"text":"The algorithm to learn SRs is based in a search through all the classes with more instances in the training set than the given threshold .","label":5,"label_text":"OWN"} +{"text":"Then ,","label":5,"label_text":"OWN"} +{"text":"It turns out that the problem is avoided by our clustering technique , since it does not need to compute the KL distance between individual word distributions , but only between a word distribution and average distributions , the current cluster centroids , which are guaranteed to be nonzero whenever the word distributions are .","label":1,"label_text":"CTR"} +{"text":"The relationship between the neural net and the rules in the prohibition table should be seen in the following way .","label":5,"label_text":"OWN"} +{"text":"Currently , there are some efforts in statistical lexical selection .","label":6,"label_text":"OTH"} +{"text":"For the word-based or class-based models , case slots are judged independent , with the data size currently available in the Penn Tree Bank .","label":5,"label_text":"OWN"} +{"text":"We will briefly discuss these models , before we 
describe our own .","label":0,"label_text":"TXT"} +{"text":"We have developed an automatic abstract generation system for Japanese expository writings based on rhetorical structure extraction .","label":2,"label_text":"AIM"} +{"text":"Parameters of the system will be varied , for example the breadth of the purview , the position of the purview focus , the number of correction candidates and the timing of their generation .","label":5,"label_text":"OWN"} +{"text":"The dead air hypothesis would seem to rely on an assumption that at unpredictable intervals , agents just can't think very well .","label":5,"label_text":"OWN"} +{"text":"Resuming the description of the grammar ,presents spreading rules .","label":5,"label_text":"OWN"} +{"text":"In fact , using this strategy one could see if the highest ranked proposal passed all the filters , or if the next highest did , etc .","label":5,"label_text":"OWN"} +{"text":"Note that this observation holds for the model in general , not just for the specialised version of the model as applied to Bamileke Dschang .","label":5,"label_text":"OWN"} +{"text":"Each of these says that I didn't steal your car , but again they each carry some extra message .","label":4,"label_text":"BKG"} +{"text":"Apart from anything else , ` representative ' is hard to decide - spectrum of errors or distribution of errors ?","label":5,"label_text":"OWN"} +{"text":"A weaker kind of cache on partial analyses ( and thus unification results ) was found to be necessary in the implementation , though , to avoid duplication of unifications ; this sped the parser up by a factor of about three , at little space cost .","label":6,"label_text":"OTH"} +{"text":"Another important consideration for scoring a word-guessing rule is that the longer the affix or ending of the rule the more confident we are that it is not a coincidental one , even on small samples .","label":5,"label_text":"OWN"} +{"text":"Reordering of RHS clauses will result in code which precedes a 
head within a LHIP rule being evaluated after it ; judicious freezing of goals and avoidance of unsafe cuts are therefore required .","label":5,"label_text":"OWN"} +{"text":"Importantly , the use of a window provides a natural means of trading off the amount of data against its quality .","label":6,"label_text":"OTH"} +{"text":"The actual definition for link length will be given later .","label":6,"label_text":"OTH"} +{"text":"The program performs this kind of crossover for the parameters h , l and d , employing independent crossover points for each , and randomising the argument order inso that the high order bits in the offspring are equally likely to come from either parent .","label":6,"label_text":"OTH"} +{"text":"The discourse carried out so far is recorded in a discourse model .","label":5,"label_text":"OWN"} +{"text":"We pay our attention to the general structure of Japanese utterance which is helpful to represent semantics of complex sentence .","label":4,"label_text":"BKG"} +{"text":"is prohibited , along with the other extractions that do not remove c from the body of the abstraction .","label":5,"label_text":"OWN"} +{"text":"Moreover , the resulting f-structure respects the LFG well formedness conditions , namely the uniqueness , completeness and coherence principles discussed in section.","label":4,"label_text":"BKG"} +{"text":"Furthermore , it enables a reduction of the number of edges that need to be stored through unfolding magic predicates .","label":5,"label_text":"OWN"} +{"text":"This bears directly on the problems of brittleness and complexity that discrete approaches to language processing share with , for example , reasoning systems based on traditional logical inference .","label":4,"label_text":"BKG"} +{"text":"However , some erroneous classes may persist because they exceed the threshold .","label":5,"label_text":"OWN"} +{"text":"These noun phrases cannot be generated correctly by the purely heuristic methods proposed here 
.","label":5,"label_text":"OWN"} +{"text":"The random choice of a presentation order for the data meant that different clusterings were arrived at on each run for a given condition ( ( N , K ) for N-grams and K clusters ) .","label":5,"label_text":"OWN"} +{"text":"It will be helpful to introduce one more level of generality .","label":5,"label_text":"OWN"} +{"text":"If the guessed POS - set is the same as the POS - set stated in the lexicon , we count it as success , otherwise it is failure .","label":5,"label_text":"OWN"} +{"text":"We only argue that ignoring it at the level of part-of-speech tagging has no measurable effect on the overall quality of the tagger .","label":5,"label_text":"OWN"} +{"text":"This means that , assuming we decide initially to attach low , but number agreement on was subsequently forces high attachment , as in, then a conscious garden path effect will be predicted , as lowering cannot derive the reanalysis .","label":5,"label_text":"OWN"} +{"text":"Previous comparisons have either focussed on context-free ( CF ) or augmented CF parsingTomita 1987,Billot and Lang 1989, or have used relatively small , limited-coverage unification grammars and lexiconsShann 1989,Bouma and van Noord 1993,Maxwell and Kaplan 1993.","label":1,"label_text":"CTR"} +{"text":"For the problem with multi-sentence discourses , and the `` threads '' that sentences continue , we use an implementation of temporal centeringKameyama et al. 
1993,Poesio 1994.","label":3,"label_text":"BAS"} +{"text":"Finally , here is a summary of our reasons for combining statistical methods with dependency representations in our language and translation models :","label":5,"label_text":"OWN"} +{"text":"After sentences of each paragraph are reduced , inter-paragraph structure reduction is carried out in the same way based on the relative importance judgement on the inter-paragraph rhetorical structure .","label":5,"label_text":"OWN"} +{"text":"As a final note , we consider the interaction between VP-ellipsis and gapping .","label":5,"label_text":"OWN"} +{"text":"Tableshows 37 of the verbs whose dependencies between arg 2 and other case slots are positive and exceed a certain threshold , i.e. P ( arg 2 = 1 , prep = 1 ) # GT 0.25 .","label":5,"label_text":"OWN"} +{"text":"One example is the processing of an unknown sequence of words , e.g. in case there is noise in the input and it is not clear how many words have been uttered during this noise .","label":5,"label_text":"OWN"} +{"text":"In some occasions , if multiple verbs exist in one sentence , they may conflict as to which verb dominates which noun phrase .","label":5,"label_text":"OWN"} +{"text":"Now ifis a cutnode , thenmust also be a cutnode even if it has a low entropy value .","label":5,"label_text":"OWN"} +{"text":"remain as mentioned in section.","label":5,"label_text":"OWN"} +{"text":"However , thesauri provide neither information about semantic difference between words juxtaposed in a category , nor about strength of the semantic relation between words -- both are to be dealt in this paper .","label":1,"label_text":"CTR"} +{"text":"The same is true for whatHumecalled Contiguity relations ( perhaps includingHobbs's Occasion and Figure-ground relations ) ; for the purpose of this paper we will consider these as weaker cases of Cause or Effect .","label":5,"label_text":"OWN"} +{"text":"The simplest of these is where the type of argument expected by the state is 
matched by the next word i.e.","label":5,"label_text":"OWN"} +{"text":"Creating and managing items for these proofs is too much of a computational overhead , and , moreover , a proof may not terminate in the bottom-up case because infinitely many consequences may be derived from the base case of a recursively defined relation .","label":5,"label_text":"OWN"} +{"text":"As mentioned above , focal centers are semantic objects mentioned in the proof node which is the local focus .","label":5,"label_text":"OWN"} +{"text":"This will not be predicted by the unselective binding of quantifiers in DRT , which quantify over all the free variables in their scope , in this case women-cat pairs .","label":1,"label_text":"CTR"} +{"text":"The conclusions are broadly in agreement with those ofMerialdo 1994, but give greater detail about the contributions of different parts of the model .","label":1,"label_text":"CTR"} +{"text":"In addition , the back-off model does not require complex estimations for interpolation parameters .","label":6,"label_text":"OTH"} +{"text":"The limit on the search radius defines the capacity of attention \/ working memory and hence defines which stored beliefs and intentions are SALIENT .","label":6,"label_text":"OTH"} +{"text":"Furthermore , certain techniques for robust parsing can be modelled as finite state transducers .","label":6,"label_text":"OTH"} +{"text":"PROVERB combines the two above mentioned presentation modes by encoding communication knowledge for both top-down planning and bottom-up presentation in form of operators in a uniform planning framework .","label":5,"label_text":"OWN"} +{"text":"The restricted combinatorial potential of the collocate lexeme is accounted for by listing it at each base with which it can occur .","label":6,"label_text":"OTH"} +{"text":"Let word, and each sense have the following context examples .","label":6,"label_text":"OTH"} +{"text":"The argument corresponds to the base and each value is a collocate 
.","label":6,"label_text":"OTH"} +{"text":"The SEM feature of ``P-garu \/ gat-ta '' is the following .","label":5,"label_text":"OWN"} +{"text":"The following is an example of a LHIP rule .","label":5,"label_text":"OWN"} +{"text":"Similarly , in many tone languages , voice pitch alone signals the tense of a verb .","label":4,"label_text":"BKG"} +{"text":"For example , the CCG category for a transitive verbwould be represented as.","label":5,"label_text":"OWN"} +{"text":"Their approach is attractive , because it permits a computational treatment of dynamism that abstracts from low level algorithmic details .","label":5,"label_text":"OWN"} +{"text":"For example , `` fun '' in `` It 's a fun thing to do . '' has properties of both a noun and an adjective ( superlative `` funnest '' possible ) .","label":5,"label_text":"OWN"} +{"text":"A decision-tree model can be represented by an interpolated n-gram model as follows .","label":5,"label_text":"OWN"} +{"text":"In comparison with the Xerox word-ending guesser taken as the base-line model we detect a substantial increase in the precision by about 22 % and a cheerful increase in coverage by about 6 % .","label":1,"label_text":"CTR"} +{"text":"We introduce the bilingual dual-coding theory as a model for bilingual mental representation .","label":2,"label_text":"AIM"} +{"text":"When we added new tags , say PREP-DE and PREP-A , for the specific prepositions while the other prepositions remained marked with PREP , we got the correct result , with no noticeable change in overall accuracy .","label":5,"label_text":"OWN"} +{"text":"Previous proposals to circumvent the above problemGood 1953,Jelinek et al. 
1992,Katz 1987,Church and Gale 1991take the MLE as an initial estimate and adjust it so that the total probability of seen bigrams is less than one , leaving some probability mass for unseen bigrams .","label":6,"label_text":"OTH"} +{"text":"where w is a source language word and M is a multiset of target language words .","label":5,"label_text":"OWN"} +{"text":"These studies demonstrated infants ' perceptive abilities without demonstrating the usefulness of infants ' perceptions .","label":1,"label_text":"CTR"} +{"text":"The configuration of four R values that we find whenis not downstepped or upstepped ( the first two columns ) is reproduced in the columns for downstep ( multiplied by d ) and in the columns for upstep ( divided by d ) .","label":5,"label_text":"OWN"} +{"text":"In speech recognition and understanding systems , many kinds of language model may be used to choose between the word and sentence hypotheses for which there is evidence in the acoustic data .","label":4,"label_text":"BKG"} +{"text":"We do so as follows :","label":5,"label_text":"OWN"} +{"text":"A grammar is lexicalized if for every local syntax tree there is at least one preterminal leaf , cf.Schabes and Waters 1993.","label":5,"label_text":"OWN"} +{"text":"For instance , the distance is one , if a modifier and a modifiee are immediately adjacent .","label":5,"label_text":"OWN"} +{"text":"Such conundrums ledParisand others to conclude that the dialect boundary , and therefore the very notion of dialect , is an ill-defined concept .","label":1,"label_text":"CTR"} +{"text":"The actor will try to make it mutually believed ( or grounded ) whether particular speech acts have been performed .","label":5,"label_text":"OWN"} +{"text":"Since TNCBs are tree-like structures , if a TNCB is undetermined or ill-formed then so are all of its ancestors ( the TNCBs that contain it ) .","label":5,"label_text":"OWN"} +{"text":"Restating the second problem noted in Section, if temporal relations can be 
recovered solely from reasoning with coherence relations , and the use of the simple past in passageis as felicitous as the past perfect in passageunder the Explanation interpretation , then one asks why passageis not understood as an Explanation as is passage, where in each case the relationship needs to be inferred .","label":1,"label_text":"CTR"} +{"text":"codifies a similar observation that there are no free-standing structures with type phrase .","label":5,"label_text":"OWN"} +{"text":"We associate probabilities with partial phrase markers , which are sets of terminal and non-terminal nodes generated by beginning from the starting node successively expanding non-terminal leaves of the partial tree .","label":6,"label_text":"OTH"} +{"text":"Unlike many other flavours of two-level morphology , the Target parts of a rule need not consist of a single character ( or class occurrence ) ; they can contain more than one , and the surface target may be empty .","label":5,"label_text":"OWN"} +{"text":"Inferring this is only reliant on the sentential-level semantics for the clauses as a whole ; there are no p ,, orto be independently identified .","label":5,"label_text":"OWN"} +{"text":"However , we haven't solved the problem completely at this point : although tense can provide a further constraint on the temporal structure of such discourses , it can also add a further ambiguity .","label":5,"label_text":"OWN"} +{"text":"We have extended thePenn and Carpenterimplementation of the HPSG grammar so that semantic aspect is calculated compositionally ( and stored here ) .","label":5,"label_text":"OWN"} +{"text":"This type of reference is different from the type that has been studied traditionally by researchers who have usually assumed that the agents have mutual knowledge of the referentAppelt 1985a,Appelt and Kronfeld 1987,Clark and Wilkes-Gibbs 1986,Heeman and Hirst 1992,Searle 1969, are copresent with the referentHeeman and Hirst 1992,Cohen 1981, or have the referent in 
their focus of attentionReiter and Dale 1992.","label":1,"label_text":"CTR"} +{"text":"It has also been the subject of a number of psycholinguistic studies on a more theoretical levelPritchett 1992,Gorrell in press.","label":6,"label_text":"OTH"} +{"text":"If this latter strategy is used in our example , this will give us the two extra rules of Figure.","label":5,"label_text":"OWN"} +{"text":"However , a drawback to the on-line algorithm is that a variant ofKipps's caching cannot be used , since the cache must necessarily assume that all reductions at a given vertex with all rules with the same number of daughters build exactly the same constituent every time ; in general this is not the case when the daughters are unification categories .","label":6,"label_text":"OTH"} +{"text":"There can be topic shifts without change of initiation , change of control without a topic shiftWhittaker and Stenton 1988.","label":5,"label_text":"OWN"} +{"text":"This technique addresses as follows the three drawbacks just alluded to .","label":5,"label_text":"OWN"} +{"text":"So , we assign less error value to the deletion-error hypothesis edge than to the insertion - and mutation-errors .","label":5,"label_text":"OWN"} +{"text":": target language generation .","label":5,"label_text":"OWN"} +{"text":"It is then possible to specify operations which act as purely applicative operations with respect to the left and right arguments lists , but more like composition with respect to the wh-list .","label":5,"label_text":"OWN"} +{"text":"The first heuristic will cause the system to select Postponed-Sabbatical(Smith,1997) and supports(Postponed-Sabbatical(Smith,1997) ,On-Sabbatical(Smith,next year)) as support , since it is the evidence in which the system is more confident .","label":5,"label_text":"OWN"} +{"text":"The results imply that the study and optimisation of unification-based parsing must rely on empirical data until complexity theory can more accurately predict the practical 
behaviour of such parsers .","label":5,"label_text":"OWN"} +{"text":"The names of senses were chosen from the category names in Roget 's International Thesaurus , except organ 's .","label":5,"label_text":"OWN"} +{"text":"a language for assembling meanings , or glue language .","label":4,"label_text":"BKG"} +{"text":"Therefore , a corpus size of 20 M words is not too small .","label":5,"label_text":"OWN"} +{"text":"Note also that the use of the same bound variable names obj and sub causes no difficulty since the use of scoped-constants , meta-level- reduction , and higher-order unification is used to access and manipulate the inner terms .","label":5,"label_text":"OWN"} +{"text":"Then we would like a formula which predictsgiven,and( i > 1 ) .","label":5,"label_text":"OWN"} +{"text":"Natural Language Generation , i.e. , the process of building an adequate utterance for some given content , is by nature a decision-making problemAppelt 1985c.","label":4,"label_text":"BKG"} +{"text":"We found that the MDL-based method performs better than the MLE-based method .","label":1,"label_text":"CTR"} +{"text":"Another important conclusion from the evaluation experiments is that the morphological guessing rules do improve the guessing performance .","label":5,"label_text":"OWN"} +{"text":"Given an unaccepted belief ( _bel ) and the beliefs proposed to support it , Select-Focus-Modification will annotate _bel with","label":5,"label_text":"OWN"} +{"text":"TheIntFilterProject at the departments of Computer and Systems Sciences , Computational Linguistics , and Psychology at Stockholm University is at present studying texts on the USENET News conferencing system .","label":6,"label_text":"OTH"} +{"text":"The similarity between the word lists W , W ' is defined as follows .","label":5,"label_text":"OWN"} +{"text":"It is necessary to locate the subject , then identify the head and determine its number in order to translate the main verb correctly in sentences likebelow 
.","label":5,"label_text":"OWN"} +{"text":"The appropriate rule in CG notation would be :","label":5,"label_text":"OWN"} +{"text":"In order to provide an independent basis for comparison , the same sentences were also input to the SRI Core Language Engine ( CLE ) parserMoore and Alshawi 1992with the CLARE2.5 grammarAlshawi et al. 1992, a state-of-the-art system accessible to the author .","label":5,"label_text":"OWN"} +{"text":"Restating the first problem noted in Section, if treating the simple past as anaphoric is used to account for the forward progression of time in passage (, then one would expect the existence of the Explanation relation in passageto cause a temporal clash , where in fact passageis perfectly felicitous .","label":1,"label_text":"CTR"} +{"text":"A discourse module might combine theories on , e.g. , centering or local focusingGrosz et al. 1983,Sidner 1979, global focusGrosz 1977, coherence relationsHobbs 1985, event referenceWebber 1986, intonational structurePierrehumbert and Hirschberg 1990, system vs. 
user beliefsPollack 1986, plan or intent recognition or productionCohen 1978,Allen and Perrault 1980,Sidner and Israel 1981, controlWhittaker and Stenton 1988, or complex syntactic structuresPrince 1985.","label":6,"label_text":"OTH"} +{"text":"Hereis a form of abstraction ; for now it will do no harm view it as a form of- abstraction , though this is not strictly accurate .","label":5,"label_text":"OWN"} +{"text":"An SDL-grammar is defined exactly like a Lambek grammar , except thatreplaces.","label":5,"label_text":"OWN"} +{"text":"Recognition for such ` off-line parsable ' grammars is decidablePereira and Warren 1983.","label":6,"label_text":"OTH"} +{"text":"For example a noun modified by onoono-no `` each '' is denumerated - singular , while one modified by ryouhou-no `` both '' is denumerated - plural .","label":5,"label_text":"OWN"} +{"text":"The case for probability theory is strengthened by a well developed empirical methodology in the form of statistical parameter estimation .","label":4,"label_text":"BKG"} +{"text":"If directly attacking _bel is also predicted to fail , the algorithm considers the effect of attacking both _bel and its unaccepted proposed evidence by combining the previous two prediction processes ( step) .","label":5,"label_text":"OWN"} +{"text":"Singular and plural forms are counted as the same noun , and nouns not covered by WordNet are ignored .","label":5,"label_text":"OWN"} +{"text":"In this paper , we assume that the observed data are generated by a model belonging to the class of models just described , and select a model which best explains the data .","label":5,"label_text":"OWN"} +{"text":"The second part of Figureshows declares how quantifiers are represented , which are required since the sentences to be processed may have determiners .","label":5,"label_text":"OWN"} +{"text":"Our techniques apply to the feature structures described byCarpenter 1992.","label":3,"label_text":"BAS"} +{"text":"For the following , we will focus on 
CoL verbs ( the Change of Location verbs ) , mainly because they are rich in spatiotemporal informations , but also because we have at disposal exhaustive lists of French CoL verbs .","label":5,"label_text":"OWN"} +{"text":"In all experiments the dependency model provides a substantial advantage over the adjacency model , even though the latter is more prevalent in proposals within the literature .","label":1,"label_text":"CTR"} +{"text":"Dealing with quantifiers incrementally is a rather similar problem to dealing with fragments of trees incrementally .","label":4,"label_text":"BKG"} +{"text":"But , ( unrestricted ) ` before ' is analyzed as ` some time before ' , and thus the problem arises .","label":6,"label_text":"OTH"} +{"text":"Finally , I wish to thank the anonymous reviewers for their comments .","label":5,"label_text":"OWN"} +{"text":"All of the assumptions start out supported by no evidence ; their evidence type is therefore hypothesis .","label":5,"label_text":"OWN"} +{"text":"On the basis of the usefulness of probabilistic context-free grammarsCharniak 1993, it is plausible to assume that that the extension of probabilistic techniques to such structures will allow the application of known and new techniques of parse ranking and grammar induction to more interesting grammars than has hitherto been the case .","label":4,"label_text":"BKG"} +{"text":"B may believe that p does not contribute to the common goal ,","label":5,"label_text":"OWN"} +{"text":"Preference judgement selects the structure candidate with the lowest penalty score , a value determined based on preference rules on every two neighboring relations in the candidate .","label":5,"label_text":"OWN"} +{"text":"extended-COMPLETER","label":5,"label_text":"OWN"} +{"text":"equilibrium","label":6,"label_text":"OTH"} +{"text":"We are trying to find the antecedent for her in the second utterance .","label":6,"label_text":"OTH"} +{"text":"Therefore , the accuracy of segmentation candidates are 99 % ( 
943 \/ 954 ) , 94.5 % ( 671 \/ 710 ) and 98.1 % ( 772 \/ 787 ) respectively .","label":5,"label_text":"OWN"} +{"text":"However , preliminary experiments using such measures as the Kullback-Liebler distance between the initial and new models have again showed that it does not give good predictions of accuracy .","label":5,"label_text":"OWN"} +{"text":"If needed , we can do another lexicon lookup for words that have the tag VERB-SG-P 3 and assign a tense to them after the disambiguation .","label":5,"label_text":"OWN"} +{"text":"LFG assumes two syntactic levels of representation : constituent structure ( c-structure ) encodes phrasal dominance and precedence relations , and functional structure ( f-structure ) encodes syntactic predicate-argument structure .","label":4,"label_text":"BKG"} +{"text":"Just as quantification over possible states of affairs yields analyses of intensional phenomena , so quantification over related models could provide a ` denotational semantics ' for.","label":5,"label_text":"OWN"} +{"text":"If the thread currently being followed is among the highest rated threads , this thread is continued .","label":5,"label_text":"OWN"} +{"text":"Most CGs either choose the third of these ( to give a vp structure ) , or include a rule of Associativity which means that the types are interchangeable ( in the Lambek Calculus , Associativity is a consequence of the calculus , rather than being specified separately ) .","label":6,"label_text":"OTH"} +{"text":"Agents are parametrized for different discourse strategies by placing different expansions of discourse plans in their plan libraries .","label":5,"label_text":"OWN"} +{"text":"A strict substitution substitutes the term by its index .","label":5,"label_text":"OWN"} +{"text":"In our current problem , a simple model means a model with less dependencies , and thus MDL provides a theoretically sound way to learn only those dependencies that are statistically significant in the given data 
.","label":5,"label_text":"OWN"} +{"text":"But why should deixis and event anaphors behave differently from the other anaphors ?","label":5,"label_text":"OWN"} +{"text":"Since there are 3 plural sentences and only 2 singular sentences , the optimal set of parameters will reflect the distribution found in the corpus , as shown in figure.","label":6,"label_text":"OTH"} +{"text":"Secondly , by setting a threshold value of 1 , LHIP can be made to perform like a standardly interpreted Prolog DCG , though somewhat more efficiently due to the use of the chart .","label":5,"label_text":"OWN"} +{"text":"Otherwise , the normal parser fails , and then the robust parser starts to execute with edges generated by the normal parser .","label":5,"label_text":"OWN"} +{"text":"conservatism in order to be able to generalize only from positive examples , without having the tendency to over-generalize .","label":5,"label_text":"OWN"} +{"text":"It is common among learners to make mistakes such as *\/kteb\/ or.","label":4,"label_text":"BKG"} +{"text":"PROMPTS : Utterances which did not express propositional content , such as Yeah , Okay , Uh-huh .","label":6,"label_text":"OTH"} +{"text":"Certain researchers in the psycholinguistic communityPritchett 1992,Gorrell in press, have argued for a binary distinction between two distinct types of garden path sentences .","label":6,"label_text":"OTH"} +{"text":"tense is resolved indefinitely with respect to a possibly anaphorically-resolved discourse reference time , and","label":2,"label_text":"AIM"} +{"text":"The tag `` PRD '' stands for predicative uses of adjectives .","label":5,"label_text":"OWN"} +{"text":"Consider the grammar given in Figure.","label":4,"label_text":"BKG"} +{"text":"Now , since the determiner and adjectives all modify the same noun , most grammars will allow us to construct the phrases :","label":5,"label_text":"OWN"} +{"text":"Lines 2 - 3 of the algorithm in Figureindicate that the actor 's first priority is fulfilling 
obligations .","label":5,"label_text":"OWN"} +{"text":"A general trend of rising accuracy on each iteration , with any falls in accuracy being local .","label":5,"label_text":"OWN"} +{"text":"This group was among classes hand-selected byBrown et al.as `` particularly interesting . ''","label":6,"label_text":"OTH"} +{"text":"The lexicon is identical to that for a standard AACG , except for having h-lists which are always set to empty .","label":5,"label_text":"OWN"} +{"text":"Finally , our account readily applies to cases of intensional verbs without coordination as in example, since it applies more generally to cases of resource sharing .","label":5,"label_text":"OWN"} +{"text":"The [ evidence-type ] annotation indicates the strength of evidence supporting the assumption .","label":5,"label_text":"OWN"} +{"text":"Meaning of a text lies in the texture of paradigmatic and syntagmatic relations between wordsHjelmslev 1943.","label":5,"label_text":"OWN"} +{"text":"Polysemous words are represented as instances of different classes .","label":6,"label_text":"OTH"} +{"text":"Firstly , two of the tests , D2 + T1 and D3 + T1 , give very poor performance .","label":5,"label_text":"OWN"} +{"text":"The use of such a semantic filter in bottom-up evaluation requires the grammar to obey the semantic monotonicity constraint in order to ensure completenessShieber 1988( see below ) .","label":6,"label_text":"OTH"} +{"text":"There are several possibilities .","label":5,"label_text":"OWN"} +{"text":"The scanning step achieves a certain degree of goal-directedness for bottom-up algorithms because only those clauses which can appear as leaves in the proof tree of the goal are added to the chart .","label":5,"label_text":"OWN"} +{"text":"If no single justification chain is predicted to be sufficient to change the user 's beliefs , new sets will be constructed by combining the single justification chains , and the selection process is repeated .","label":5,"label_text":"OWN"} 
+{"text":"Another possible way to infer the sense is to choose sensesuch that the average ofoveris maximum .","label":6,"label_text":"OTH"} +{"text":"Within two-person interactive dialogues , there are the task-oriented master-slave type , where all the expertise and hence much of the initiative , rests with one person .","label":5,"label_text":"OWN"} +{"text":"( Thus any difference in performance of more than around 15 % is likely to stem from algorithmic rather than implementational reasons ) .","label":5,"label_text":"OWN"} +{"text":"Therefore generally we don't expect essential information for relations among semantic roles appearing in adverbial or main clause from this type of sentence .","label":5,"label_text":"OWN"} +{"text":"Analyzing compound nouns is one of the crucial issues for natural language processing systems , in particular for those systems that aim at a wide coverage of domains .","label":4,"label_text":"BKG"} +{"text":"There are at least two ways in which each mother 's probabilities can be calculated ; firstly , the probability information of the same type can be used : the daughters ' X-bar probabilities alone could be used in calculating the mother 's X-bar probability .","label":5,"label_text":"OWN"} +{"text":"Finally it showed how dynamics can be used as a formal description of processing accounts which use a full parsing history , and how the characterisations of parsing states can be chosen to enforce the requisite degree of parallelism between conjuncts .","label":2,"label_text":"AIM"} +{"text":"We assume that there is a one-to-one map from the nonterminal leaf nodes of the ( local ) syntax tree on the leaf nodes of the ( local ) semantic derivation tree .","label":4,"label_text":"BKG"} +{"text":"So , one can compare default handling with advice to the system .","label":4,"label_text":"BKG"} +{"text":"To which extent it is useful to collapse magic predicates using unfolding depends on whether the grammar has been optimized through 
reordering the right-hand sides of the rules in the grammar as discussed in section.","label":5,"label_text":"OWN"} +{"text":"Given that the discourse inference mechanisms retrieve semantic forms through nodes in the syntax , this syntax will have to be recovered when a node being accessed is missing .","label":5,"label_text":"OWN"} +{"text":"SRs must include information on the syntactic position of the words that are being restricted semantically .","label":4,"label_text":"BKG"} +{"text":"The biases in the model consisted for the most part in specifications of the most plausible successor tags for each tag in the tag set .","label":5,"label_text":"OWN"} +{"text":"crossover","label":6,"label_text":"OTH"} +{"text":"This work was supported in part by National Science Foundation Grant IRI - 9009018 , National Science Foundation Grant IRI - 9350192 , and a grant from the Xerox Corporation .","label":5,"label_text":"OWN"} +{"text":"This paper explores the application of corpus statisticsCharniak 1993to noun compound parsing ( other computational problems are addressed inArens et al. 
1987,Vanderwende 1993andSproat 1994) .","label":2,"label_text":"AIM"} +{"text":"For example , the sentence `` every man found a bone '' has as a possible LF, with the-Prolog representation:","label":5,"label_text":"OWN"} +{"text":"Letbe the node referred by, and letbe thickness of.","label":5,"label_text":"OWN"} +{"text":"When the input contains a pre-determined commonly substituted word , the parser attempts to continue with both the original input word and a specified `` correct '' word .","label":6,"label_text":"OTH"} +{"text":"We also used the acquired dependency knowledge in a pp-attachment disambiguation experiment .","label":5,"label_text":"OWN"} +{"text":"This joint optimization searches for a saddle point in the distortion-entropy parameters , which is equivalent to minimizing a linear combination of the two known as free energy in statistical mechanics .","label":5,"label_text":"OWN"} +{"text":"For each symbol that occurs on the right-hand side of a rule but which was not one of the most frequent 20 symbols , we create a rule that expands that symbol to a unique terminal symbol .","label":5,"label_text":"OWN"} +{"text":"They assumed compound nouns consist of only one character words and two character words .","label":1,"label_text":"CTR"} +{"text":"The data for this test was built from the training data for the previous one in the following way , based on a suggestion byDagan et al. 
1993.","label":3,"label_text":"BAS"} +{"text":"In order to illustrate how ordinary parsers can be used to compute the intersection of a FSA and a CFG consider first the definite-clause specification of a top-down parser .","label":5,"label_text":"OWN"} +{"text":"We then formalize the dependencies between case slots as the probabilistic dependencies between these random variables .","label":5,"label_text":"OWN"} +{"text":"This paper gives an overview of how natural language is converted to a representation that the neural nets can handle , and how the problem is reduced to a manageable size .","label":5,"label_text":"OWN"} +{"text":"However , we may also want a model for, for example for pruning speech recognition hypotheses .","label":5,"label_text":"OWN"} +{"text":"LHIP is based on the assumption that partial results can be useful ( often much more useful than no result at all ) , and that an approximation to complete coverage is more useful when it comes with indications of how approximate it is .","label":5,"label_text":"OWN"} +{"text":"We opt for a particularly simple specification language : a propositional language enriched with operators for talking about c - and f-structures , together with a path equality construct for enforcing synchronisation between the two domains .","label":5,"label_text":"OWN"} +{"text":"For each piece of evidence that could be used to directly support _bel , the system first predicts whether the user will accept the evidence without justification .","label":5,"label_text":"OWN"} +{"text":"In this paper we present a technique for fully unsupervised statistical acquisition of rules which guess possible parts-of-speech for unknown words .","label":2,"label_text":"AIM"} +{"text":"The event marker -is introduced in the antecedent box , with the condition that it be temporally included in the current reference time ,and be prior to n .","label":6,"label_text":"OTH"} +{"text":"It is also used to refer to sets of propositions of the 
preceding discourse , Now THAT 'S a little backgroundWebber 1988.","label":5,"label_text":"OWN"} +{"text":"Though the form in which the vector is written may give an illusion of representing order , no sequential order is maintained .","label":5,"label_text":"OWN"} +{"text":"From this or-node we follow an arc labelled Id , or add a new one if there is none .","label":5,"label_text":"OWN"} +{"text":"In effect , an arbitrary permutation of signs is input to a shift-reduce parser which tests them for grammatical well-formedness .","label":6,"label_text":"OTH"} +{"text":"Two membranes can interact when they contact with the notation `' , as.","label":5,"label_text":"OWN"} +{"text":"In our ALE implementation , a DCU contains the following slots for temporal information :","label":5,"label_text":"OWN"} +{"text":"TEMP_CENTER : Used for temporal centering ; Keeps track of the thread currently being followed ( since there is a preference for continuing the current thread ) and all the threads that have been constructed so far in the discourse .","label":5,"label_text":"OWN"} +{"text":"We also looked at the TODs for instances of anaphora being used to describe a future act in the way that we observed in the ADs. However , over the 938 turns in the TODs , there were only 18 instances of event anaphora , because in the main there were few occasions when it was necessary to talk about the plan .","label":5,"label_text":"OWN"} +{"text":"The the evaluation function is as follows :","label":6,"label_text":"OTH"} +{"text":"A number of built-in predicates are provided which allow the user to constrain the behaviour of the parser in various ways , based on the notions of coverage , span and threshold :","label":5,"label_text":"OWN"} +{"text":"Let these probabilities befor i = 1 , ... 
, N for the N possible next words, i.e..","label":4,"label_text":"BKG"} +{"text":"We obtained significant improvement in 5 kanzi and 6 kanzi collection .","label":5,"label_text":"OWN"} +{"text":"The main advantage of the latter is that they provide experimental evidence of words uses .","label":4,"label_text":"BKG"} +{"text":"We remark that a change of language in a text could appear at each change of sentence ( more often paragraph ) or in each included segment via quotes , parenthesis , dashes or colons .","label":4,"label_text":"BKG"} +{"text":"Explicit relation markers such as cue words and temporal relations must be consistent and take priority over indicators such as tense and aspect .","label":5,"label_text":"OWN"} +{"text":"In Tablewe show the constraints that say which semantic \/ pragmatic role of subordinate clause can be a motivated .","label":5,"label_text":"OWN"} +{"text":"This fact is predicted by our account in the following way .","label":5,"label_text":"OWN"} +{"text":"The possibilities for rhetorical relations ( e.g. 
, whether something is narration , or elaboration , or a causal relation ) can be further constrained by aspect .","label":5,"label_text":"OWN"} +{"text":"We need to distinguish between parallel and non-parallel terms in ellipsis antecedents .","label":5,"label_text":"OWN"} +{"text":"The results support two main conclusions .","label":5,"label_text":"OWN"} +{"text":"The precision is the simple average of the respective precisions for the two senses .","label":5,"label_text":"OWN"} +{"text":"We assume a simple greedy search strategy .","label":5,"label_text":"OWN"} +{"text":"The only transition inthat differs from that of the corresponding word in the ` core ' variantis that of ` dog ' which has the respective transitions :","label":5,"label_text":"OWN"} +{"text":"Finally ,is calculated by","label":5,"label_text":"OWN"} +{"text":"fs and bs are declared to be constructors for forward and backward slash .","label":5,"label_text":"OWN"} +{"text":"However , we use the following approximation :","label":5,"label_text":"OWN"} +{"text":"By noticing that this invariant is true for Ax and is preserved by the rules , we immediately can state :","label":5,"label_text":"OWN"} +{"text":"For instance , no special category assignments need to be stipulated to handle a relative clause containing a trace , because it is analyzed , via hypothetical reasoning , like a traceless clause with the trace being the hypothesis to be discharged when combined with the relative pronoun .","label":4,"label_text":"BKG"} +{"text":"For this kind of example , it is still possible to use a standard dynamic semantics , but only if there is some prior level of reference resolution which reorders the antecedents and anaphors appropriately .","label":6,"label_text":"OTH"} +{"text":"The rows , denoted ``'' , shows the percentage of correct answers in the n-th rank .","label":5,"label_text":"OWN"} +{"text":"The content model ,, and generation model ,, are components of the overall statistical model for 
spoken language translation given earlier .","label":5,"label_text":"OWN"} +{"text":"The set of relation edges for the entire derivation is the union of these local edge sets .","label":5,"label_text":"OWN"} +{"text":"The current model permits an arbitrary number of crossing points for crossover on the transcription string .","label":6,"label_text":"OTH"} +{"text":"Rare events , rather than being noise , can make a useful contribution to a classification task .","label":5,"label_text":"OWN"} +{"text":"We have now reached a point of recursion and can index the corresponding subtree .","label":5,"label_text":"OWN"} +{"text":"The Actual sentence evaluation is carried out in a demerit marking way .","label":5,"label_text":"OWN"} +{"text":"This arises in the present analysis for two reasons :","label":5,"label_text":"OWN"} +{"text":"Even a crude account of re-entrancy is better than completely ignoring the issue , and the one proposed gets the right result for cases of double counting such as those discussed above , but it should be obvious that there is room for improvement in the treatment which we provide .","label":5,"label_text":"OWN"} +{"text":"CONSTRAINTS","label":6,"label_text":"OTH"} +{"text":"In other words , this is ` interpretation as abduction 'Hobbs et al. 1988, since abduction , not deduction , is needed to arrive at the assumptions A .","label":5,"label_text":"OWN"} +{"text":"Taking as input a subproof , Present repeatedly executes a basic planning cycle until the input subproof is conveyed .","label":5,"label_text":"OWN"} +{"text":"If the input subsequently continues with a verb , then we have a choice of two nodes for lowering , i.e. 
NPand NP.","label":5,"label_text":"OWN"} +{"text":"Our experimental results indicate that for some classes of verbs the accuracy achieved in a disambiguation experiment can be improved by using the acquired knowledge of dependencies between case slots .","label":5,"label_text":"OWN"} +{"text":"Thus , the model admits both transcription schemes that result from the two views of downstep , and another besides , as shown later in.","label":5,"label_text":"OWN"} +{"text":"The meaning language could be that of any appropriate logic ; for present purposes , higher-order logic will suffice .","label":4,"label_text":"BKG"} +{"text":"That is , they add a feature value pair to any consistent f-structure .","label":6,"label_text":"OTH"} +{"text":"Hereis the set of nodes inthat should be removed to avoid violating the constraints on neighbouring cutnodes .","label":5,"label_text":"OWN"} +{"text":"Therefore , using the initial restrictions , in a sentence of 22 words or more ( counting punctuation marks as words ) there could be 100 alternative placements .","label":5,"label_text":"OWN"} +{"text":"If the rule is applicable to the word we perform look-up in the lexicon for this word and then compare the result of the guess with the information listed in the lexicon .","label":5,"label_text":"OWN"} +{"text":"A crucial assumption ofClark and Wilkes-Gibbs's work -- and ofHeeman and Hirst's model -- is that the recipient of the initial referring expression already has some knowledge of the referent in question .","label":6,"label_text":"OTH"} +{"text":"We call these verbs inertial change of position ( ICoPs ) verbs .","label":5,"label_text":"OWN"} +{"text":"This would allow one measure of the ( linguistic ) feasibility of cooperative error processing : the effectiveness of shallow processing over errors revealed by the keystroke-record data .","label":5,"label_text":"OWN"} +{"text":"Perhaps the most influential and widely-adopted semantic treatment of coordination is the approach 
ofPartee and Rooth 1983.","label":6,"label_text":"OTH"} +{"text":"Examination of the dialogues indicated that there seemed to be different types of control shifts : after some shifts there seemed to be a change of topic , whereas for others the topic remained the same .","label":5,"label_text":"OWN"} +{"text":"Partial parsing needs to be adapted to support the idea of the PET purview ; partial parsing that accepts any string likely to constitute part of a sentence .","label":5,"label_text":"OWN"} +{"text":"As the figure shows , the cluster model provides over one bit of information about the selectional properties of the new nouns , but the overtraining effect is even sharper than for the held-out data involving the 1000 clustered nouns .","label":5,"label_text":"OWN"} +{"text":"there is an isomorphismfromto","label":5,"label_text":"OWN"} +{"text":"Initially the weighted links are disabled .","label":5,"label_text":"OWN"} +{"text":"A common technique used for ignoring as far as possible this noise is to consider only those events that have a higher number of occurrences than a certain threshold .","label":5,"label_text":"OWN"} +{"text":"If the input structure for generation is provided by another AI-system , global problems in producing sufficient input information for the generator may occur , e.g. , because of translation mismatches in machine translationKameyama 1991.","label":4,"label_text":"BKG"} +{"text":"In,anddenote , respectively , the sets of events described by the subordinate and the main clause ,denotes the image set ofunder the temporal connective TC , i.e. 
the set of eventswhich are related tovia the relation TC , ( presented in) .","label":6,"label_text":"OTH"} +{"text":"However if there can be relations between every pair of semantic roles , the amount of computation to identify the relations that hold in the given sentence is extremely large .","label":4,"label_text":"BKG"} +{"text":"In, the reduction rule is augmented to handle indices .","label":5,"label_text":"OWN"} +{"text":"We can visualise the result in terms of a three-dimensional tree structure , where the merged material is on one plane , and the syntax trees for each conjunct are on two other planes .","label":6,"label_text":"OTH"} +{"text":"In the method described here , [s] is adjudged closer to [g] than to [h] .","label":5,"label_text":"OWN"} +{"text":"The ANLT grammar contains more than five times as many rules than does the sentence-level portion of the CLARE2.5 grammar , andAlshawi( personal communication ) points out that the CLE parser had not previously been run with a grammar containing such a large number of rules , in contrast to the ANLT parsers .","label":5,"label_text":"OWN"} +{"text":"Discourse Goals : Domain Plan Negotiation","label":5,"label_text":"OWN"} +{"text":"The following paragraphs describe these components in more detail .","label":0,"label_text":"TXT"} +{"text":"The temporal clause may be processed before the main clause , since t ' , the location time of e ' , which ` replaces ', the reference time ofPartee's analysis , as the temporal index of the eventuality in the the main clause , arises from processing the main clause ( not updating the reference time of the subordinate clause ) .","label":5,"label_text":"OWN"} +{"text":"To explore the phenomenon of control in relation to ATTENTIONAL STATEGrosz and Sidner 1986,Grosz et al. 
1986,Sidner 1979.","label":2,"label_text":"AIM"} +{"text":"Due to the beam search heuristic and the ambiguity packing scheme , this set of parses is limited to maximal or close to maximal grammatical subsets .","label":6,"label_text":"OTH"} +{"text":"This is consistent with our earlier assumptions about the source language model .","label":5,"label_text":"OWN"} +{"text":"Proof : a proof schema , which characterizes the syntactical structure of a proof segment for which this operator is designed .","label":5,"label_text":"OWN"} +{"text":"The SELECTIONAL ASSOCIATION between a predicate and a word is defined based on the contribution of the word to the conditional entropy of the predicate .","label":6,"label_text":"OTH"} +{"text":"It plays an important role in machine translationPustejovsky and Nirenburg 1987.","label":4,"label_text":"BKG"} +{"text":"The different techniques are experimentally evaluated in section.","label":0,"label_text":"TXT"} +{"text":"This information must be exchanged , so that the mutual beliefs necessary to develop the collaborative plan are established in the conversationJoshi 1982.","label":5,"label_text":"OWN"} +{"text":"Still , the sentence needs to be interpreted relative to a reference time .","label":6,"label_text":"OTH"} +{"text":"where the indices,andare mnemonic for Canadian flag , American flag and house .","label":5,"label_text":"OWN"} +{"text":"Each obligation on the stack is represented as an obligation type paired with a content .","label":6,"label_text":"OTH"} +{"text":"For these values , the improvement in perplexity for unseen bigrams in a held-out 18 thousand word sample , in which 10.6 % of the bigrams are unseen , is just over 20 % .","label":5,"label_text":"OWN"} +{"text":"Nouns that do not have both forms , like equipment or scissors , require a classifier to be used .","label":4,"label_text":"BKG"} +{"text":"It can be used to specify the states of a left to right parser and the possible mappings between states 
.","label":6,"label_text":"OTH"} +{"text":"The penalties of the different features are then combined into a single score using a linear combination .","label":5,"label_text":"OWN"} +{"text":"mutation-error hypothesis :","label":6,"label_text":"OTH"} +{"text":"The probabilities associated with phrases in the above description are computed according to the statistical models for analysis , translation , and generation .","label":5,"label_text":"OWN"} +{"text":"SeeMiller 1991for a discussion of how this may be used for evaluation of functional programs by `` pushing '' the evaluation through abstractions to reduce redexes that are not at the top-level .","label":5,"label_text":"OWN"} +{"text":"The special symbol * is a wildcard matching any context , with no length restrictions .","label":6,"label_text":"OTH"} +{"text":"Obviously , a bottom-up component is required .","label":5,"label_text":"OWN"} +{"text":"A subcase of ensuring that certain inferences get made involves the juxtaposition of two propositions .","label":1,"label_text":"CTR"} +{"text":"Contexts with rare words ( less than ten occurrences ) were also excluded for similar reasons : If a word only occurs nine or fewer times its left and right context vectors capture little information for syntactic categorization .","label":5,"label_text":"OWN"} +{"text":"Most computational models of discourse are based primarily on an analysis of the intentions of the speakersCohen and Perrault 1979,Allen and Perrault 1980,Grosz and Sidner 1986.","label":6,"label_text":"OTH"} +{"text":"Thus conceptually at least , their processes are agglomerative : a large initial set of words is clumped into a smaller number of clusters .","label":6,"label_text":"OTH"} +{"text":"The leaf nodes represent the unique states in the decision-making problem , i.e. 
all contexts which lead to the same leaf node have the same probability distribution for the decision .","label":6,"label_text":"OTH"} +{"text":"Volume 1 ofWagner 1958consists of 300 maps , plotting about 370 concepts .","label":6,"label_text":"OTH"} +{"text":"We repeat this process until we can no longer find a modification that improves the current hypothesis grammar .","label":5,"label_text":"OWN"} +{"text":"The values of mutual information and thresholds for all node pairs are shown in Table.","label":6,"label_text":"OTH"} +{"text":"is the cost of an insertion-error for a terminal symbol .","label":6,"label_text":"OTH"} +{"text":"Therefore these constraints are local in subordinate clause .","label":5,"label_text":"OWN"} +{"text":"Research on the quantity of evidence indicates that there is no optimal amount of evidence , but that the use of high-quality evidence is consistent with persuasive effectsReinard 1988.","label":6,"label_text":"OTH"} +{"text":"The length of the representation of the integer n is given by the function","label":5,"label_text":"OWN"} +{"text":"The number of iterative cycles that are necessary depends on the threshold chosen for the trained net to cross , and on details of the vector representation .","label":5,"label_text":"OWN"} +{"text":"We have also argued that an architecture that uses obligations provides a much simpler implementation than the strong plan-based approaches .","label":1,"label_text":"CTR"} +{"text":"A more thoroughgoing comparison of these two approaches to the problem needs to be undertaken .","label":5,"label_text":"OWN"} +{"text":"This theory yields the following definition :","label":5,"label_text":"OWN"} +{"text":"The networks works in the following way : for a target-language f-structure to be generated , the transfer system knows its phrasal category and its corresponding source-language f-structure from the networks that perform the sub-task.","label":5,"label_text":"OWN"} +{"text":"In fact , this is not so 
.","label":5,"label_text":"OWN"} +{"text":"If this information is not to be lost , some way of referring to the structure of the compositions , as well as to their results , seems to be required .","label":5,"label_text":"OWN"} +{"text":"We also looked at when no signal was given ( interruptions ) .","label":5,"label_text":"OWN"} +{"text":"In terms of ambiguity rates , the English , French , and German texts are thus quite comparable .","label":5,"label_text":"OWN"} +{"text":"For instance if one assumes a mechanism of default unification , one can have $ strong refer to the full entry describing ` strong ' in say its ordinary use , and have the values that are particular to the collocational strong overwrite the values provided in the ordinary entry , as in Mel'cuk's proposal .","label":6,"label_text":"OTH"} +{"text":"After finding that big may not be conjoined with the brown dog , we try to adjoin it within the latter .","label":5,"label_text":"OWN"} +{"text":"To summarize , our monolingual models are specified by :","label":5,"label_text":"OWN"} +{"text":"In contrast , constraining equations are intended to constrain the value of an already existing feature-value pair .","label":6,"label_text":"OTH"} +{"text":"It is essentially in addressing the issue of overgenerality that Mel'cuk introduces sub - and superscripts to lexical functions , enhancing their precision and making them sensitive to meaning aspects of the lexical items over which they operate .","label":6,"label_text":"OTH"} +{"text":"A big tagset does not cause trouble for a constraint-based tagger because one can refer to a combination of tags as easily as to a single tag .","label":6,"label_text":"OTH"} +{"text":"Moreover , since a daughter shares subtrees with itself , a special case of the condition is that subtrees occurring within some daughter can only appear as siblings in the mother .","label":5,"label_text":"OWN"} +{"text":"The work on stochastic context-free grammars suggests a different set of 
results , in that the specific categories involved in expansions are all important .","label":1,"label_text":"CTR"} +{"text":"This small lexicon contained only 5,456 entries out of 53,015 entries of the original Brown Corpus lexicon .","label":5,"label_text":"OWN"} +{"text":"The reason is that there are now two sources of recursion : in the DCG and in the FSA ( cycles ) .","label":1,"label_text":"CTR"} +{"text":"PLAN QUALITY :","label":5,"label_text":"OWN"} +{"text":"In this corpus , the chosen nouns appear as direct object heads of a total of 2147 distinct verbs , so each noun is represented by a density over the 2147 verbs .","label":5,"label_text":"OWN"} +{"text":"Once we have this mechanism , we can use it to construct interpretations of sentences like-.","label":5,"label_text":"OWN"} +{"text":"In this paper , the rhetorical structure is represented by two layers : intra-paragraph and inter-paragraph structures .","label":5,"label_text":"OWN"} +{"text":"de Swart 1991 sees Partee's quantification problem as a temporal manifestation of the proportion problem , which arises in cases such as Kadmon 1990.","label":6,"label_text":"OTH"} +{"text":"This interpolation virtually enlarged West's 5,000,000 - word corpus .","label":3,"label_text":"BAS"} +{"text":"The important points to note about this converted form are the following :","label":5,"label_text":"OWN"} +{"text":"It can incrementally adapt to new experiences simply by adding new data to the training samples and modifying the associations according to the changed statistics .","label":6,"label_text":"OTH"} +{"text":"In passage, both the tense and the coherence relation order the times in backward progression .","label":5,"label_text":"OWN"} +{"text":"In the first experiment we tagged the text with the Brown Corpus lexicon supplied with the taggers and hence had only those unknown words which naturally occur in this text .","label":5,"label_text":"OWN"} +{"text":"The inclusion of the operator increases the 
complexity to exponential .","label":5,"label_text":"OWN"} +{"text":"First , we are planning to apply the algorithm to an as yet untagged language .","label":5,"label_text":"OWN"} +{"text":"The linguistic study of French intransitive motion verbsAsher and Sablayrolles 1994awe have realized has allowed the definition of an ontology for `` location '' in three basic concepts :","label":6,"label_text":"OTH"} +{"text":"Each simulation was run on each sample , for a total of twelve DIST runs .","label":5,"label_text":"OWN"} +{"text":"free energy function","label":6,"label_text":"OTH"} +{"text":"one or more RHS clauses may be marked as optional ;","label":5,"label_text":"OWN"} +{"text":"This processing described above has been implemented in ALT-J \/ E. It was tested , together with new processing to generate articles , on a specially constructed set of test sentences , and on a collection of newspaper articles .","label":5,"label_text":"OWN"} +{"text":"Therefore , the VP in the third clause need not be reconstructed , and the subsequent semantically-based resolution of the anaphoric form succeeds .","label":5,"label_text":"OWN"} +{"text":"The 3-D approaches and processing strategies use syntactic context more directly , and it is to these methods which we now turn .","label":6,"label_text":"OTH"} +{"text":"We chose instead to deal with the ordering problem by using off-line compilation to automatically optimize a grammar such that it can be used for generation , without additional provision for dealing with the evaluation order , by our Earley generator .","label":5,"label_text":"OWN"} +{"text":"This finding confirms the idea that distribution and phonotactics are useful sources of information that infants might use in discovering wordsJusczyk et al. 
1993b.","label":3,"label_text":"BAS"} +{"text":"The interpolated modelis used in the back-off scheme as, to obtain better estimates for unseen bigrams .","label":5,"label_text":"OWN"} +{"text":"Coordinations are classified as non-constituent coordination if the conjuncts fail to be constituents in a ` standard ' phrase structure grammar .","label":5,"label_text":"OWN"} +{"text":"In doing this , we assume the existence , for each type of pragmatic inference , of a set of necessary conditions that must be true in order for that inference to be triggered .","label":5,"label_text":"OWN"} +{"text":"Next , one agent or the other must refashion the referring expression plan in the context of the judgment by either replacing some of its actions ( by using replace-plan ) or by adding new actions to it ( by using expand-plan ) .","label":6,"label_text":"OTH"} +{"text":"This dataflow analysis takes as input a specification of the paths of the start category that are considered fully instantiated .","label":5,"label_text":"OWN"} +{"text":"First we need to calculate the entropy of each or-node .","label":5,"label_text":"OWN"} +{"text":"The weights used in this scheme are adjustable , and can be optimized for a particular domain and \/ or grammar .","label":5,"label_text":"OWN"} +{"text":"The relations exemplified in Tableare used for representing the rhetorical structure .","label":5,"label_text":"OWN"} +{"text":"Most of them describe meaning of words using special symbols like microfeaturesWaltz and Pollack 1985,Hendler 1989that correspond to the semantic dimensions .","label":6,"label_text":"OTH"} +{"text":"Another issue that has to be raised concerns the translation of collocations into non-collocational constructions .","label":5,"label_text":"OWN"} +{"text":"As language differences get smaller , one expects that more data will be required in order to elucidate them .","label":5,"label_text":"OWN"} +{"text":"What is the word being tagged ?","label":6,"label_text":"OTH"} 
+{"text":"However , comparing the average performances of simulations is also useful .","label":5,"label_text":"OWN"} +{"text":"A shorter version of this paper appears in the Proceedings of the ACL Conference on Applied Natural Language Processing , Stuttgart , October 1994 , and is Association for Computational Linguistics .","label":5,"label_text":"OWN"} +{"text":"Unlike a simple Markov process , there are a potentially infinite number of states , so there is inevitably a problem of sparse data .","label":5,"label_text":"OWN"} +{"text":"Let us consider an example `` SinGataKansetuZei '' .","label":5,"label_text":"OWN"} +{"text":"On the sloppy reading Simon loves Simon 's mother .","label":4,"label_text":"BKG"} +{"text":"perfect match :","label":6,"label_text":"OTH"} +{"text":"This section describes procedures to acquire collocational information for analyzing compound nouns from a corpus of four kanzi character words .","label":0,"label_text":"TXT"} +{"text":"We described measurement of semantic similarity between words .","label":2,"label_text":"AIM"} +{"text":"We have argued that this is the correct distinction to make , and have given a treatment of the second issue .","label":5,"label_text":"OWN"} +{"text":"Lexical Functions abstract away from certain nuances in meaning and from different syntactic realizations .","label":5,"label_text":"OWN"} +{"text":"N to","label":5,"label_text":"OWN"} +{"text":"Apply the anaphora module first .","label":5,"label_text":"OWN"} +{"text":"So , parsing systems are likely to have extragrammatical sentences which cannot be analyzed by the systems .","label":5,"label_text":"OWN"} +{"text":"Some early attempts at vector representation in psycholinguistics were the semantic differential approachOsgood et al. 1957and the associative distribution approachDeese 1962.","label":6,"label_text":"OTH"} +{"text":"If X is a syntactic type ( e.g. 
s , np ) , thenis a category .","label":5,"label_text":"OWN"} +{"text":"For a sentence without any explicit connective expressions , extension relation is set to the sentence .","label":5,"label_text":"OWN"} +{"text":"An apparent local minimum in the spacemay no longer be a local minimum in the space; the extra dimension may provide a pathway for further improvement of the hypothesis grammar .","label":5,"label_text":"OWN"} +{"text":"ForSiadhail, the ultimate scientific justification in adopting the three-dialect account is the fact that the Gaeltacht ( Irish-speaking territory ) is so fragmented nowadays that it no longer forms a continuum .","label":6,"label_text":"OTH"} +{"text":"Specifically , if a treatment such asHinrichs's is used to explain the forward progression of time in example, then it must be explained why sentenceis as felicitous as sentence.","label":1,"label_text":"CTR"} +{"text":"Evidence of acceptance may be given explicitly , but acceptance can be inferred in some dialogue situations via the operation of a simple principle of cooperative dialogue :","label":5,"label_text":"OWN"} +{"text":"I showed that it is desirable for phonologists working on tone to use sequences of Fvalues as their primary data , rather than impressionistic transcriptions which make ( usually implicit ) assumptions about Fscaling .","label":5,"label_text":"OWN"} +{"text":"Whenis applied to the predicate ,will result after- reduction .","label":6,"label_text":"OTH"} +{"text":"Evaluation of tagging accuracy on unknown words using texts unseen by the guessers and the taggers at the training phase showed that tagging with the automatically induced cascading guesser was consistently more accurate than previously quoted results known to the author ( 85 % ) .","label":1,"label_text":"CTR"} +{"text":"S is a total function from U to","label":5,"label_text":"OWN"} +{"text":"Motivation and exemplification for the model is provided by data taken from my fieldwork on Bamileke Dschang ( 
Cameroon ) .","label":5,"label_text":"OWN"} +{"text":"Further examples which are problematic for Sag et al. are given by Jorgensen and Abeille 1992.","label":1,"label_text":"CTR"} +{"text":"The reasoning leading up to utteranceis similar to that leading to utterance.","label":5,"label_text":"OWN"} +{"text":"In other words , introducing linguistically relevant information such as gender into the tagset is fine , but if this information is not used in the linguistically relevant context , the benefit is unclear .","label":5,"label_text":"OWN"} +{"text":", were missing in the small lexicon .","label":5,"label_text":"OWN"} +{"text":"Therefore the proposed method translates it with the same number as the subject .","label":5,"label_text":"OWN"} +{"text":"The concept c that maximizes the expression inwill be referred to as the most informative subsumer ofand.","label":5,"label_text":"OWN"} +{"text":"A second problem with the account is that , as with Steedman's coordination schema , Partee and Rooth's type-raising strategy only applies to coordinate structures .","label":1,"label_text":"CTR"} +{"text":"Deleted interpolation estimates a modelby using a linear combination of empirical modelswhereandfor all","label":6,"label_text":"OTH"} +{"text":"INTERRUPTION :","label":6,"label_text":"OTH"} +{"text":"Our dataflow analysis ignores the grammatical head , but identifies instead the ` processing head ' , and ( no less importantly ) the ` first processing complement ' , the ` second processing complement ' , and so on .","label":5,"label_text":"OWN"} +{"text":"Commands - utterances which were intended to instigate action in their audience .","label":5,"label_text":"OWN"} +{"text":"The minimal purpose of any dialogue is that an utterance be understood , and this goal is a prerequisite to achieving other goals in dialogue , such as commitment to future action .","label":5,"label_text":"OWN"} +{"text":"Also , because the grammar has no means of enforcing number agreement , the system 
systematically prefers plurals to singulars , even when doing this will lead to agreement clashes .","label":1,"label_text":"CTR"} +{"text":"Temporal adverbials restrict the location time : temporal adverbs introduce a DRS-condition on the location time , while temporal subordinate clauses introduce a relation between the event time of the subordinate clause and the location time of the main clause .","label":5,"label_text":"OWN"} +{"text":"The Subcategorization Principle involves an operation on lists ( append\/3 or delete\/3 in different formalizations ) that does not need bottom-up processing , but can better be evaluated by top-down resolution if its arguments are sufficiently instantiated .","label":5,"label_text":"OWN"} +{"text":"The probabilities tell us that the corpus contains no free-standing structures of type num .","label":5,"label_text":"OWN"} +{"text":"Non-deterministic search methods have been devised as a way of tackling large-scale combinatorial optimisation problems , problems that involve finding optima of functions of discrete variables .","label":4,"label_text":"BKG"} +{"text":"Stewarttreats it as a total downstep language , i.e. whereH appears as an L tone ( with respect to the material to its left ) .","label":6,"label_text":"OTH"} +{"text":"The input to the net is derived from the candidate strings , the sequences of tags and hypertags .","label":5,"label_text":"OWN"} +{"text":"Reyle 1993developed an inference system for Underspecified Discourse Representation Structures ( UDRS 's ) , i.e. 
Discourse Representation Structures Kamp and Reyle 1993 which are underspecified with respect to scope .","label":6,"label_text":"OTH"} +{"text":"The temporal relation in the sentence is inclusion between the event time of Anne 's coming home , and the location time of the result state of Paul 's already having prepared dinner .","label":5,"label_text":"OWN"} +{"text":"Moreover , frequency is relative to texts , not to sentences .","label":5,"label_text":"OWN"} +{"text":"where","label":5,"label_text":"OWN"} +{"text":"Non-monotonic changes preserve these results .","label":5,"label_text":"OWN"} +{"text":"The middle ground involves taking the type of object into account when choosing attributes and landmarks that relate to it .","label":5,"label_text":"OWN"} +{"text":"One possibility corresponds to the prediction of an ss modifier , a second to the prediction of an ( nps )( nps ) modifier ( i.e. a verb phrase modifier ) , a third to there being a function which takes the subject and the verb as separate arguments , and the fourth corresponds to there being a function which requires an s \/ np argument .","label":5,"label_text":"OWN"} +{"text":"calculate the preferences of every structure of the compound noun according to the frequencies of category collocations","label":5,"label_text":"OWN"} +{"text":"The model handles errors vocalisation , diacritics , phonetic syncopation and morphographemic idiosyncrasies , in addition to Damerau errors .","label":5,"label_text":"OWN"} +{"text":"Narrative progression is dealt with by using the feature Rpt ( or reference point ) .","label":5,"label_text":"OWN"} +{"text":"This last condition ensures that the target graph partitions join up in a way that is compatible with the node alignment f .","label":5,"label_text":"OWN"} +{"text":"The first is that currently the rules for determining the noun phrase referentiality are insufficiently fine .","label":5,"label_text":"OWN"} +{"text":"They are either primitive PCAs , or are recursive 
calls to the procedure Present for subproofs .","label":5,"label_text":"OWN"} +{"text":"That is , the context vector for context ,","label":6,"label_text":"OTH"} +{"text":"Although attempts have been made to modify PS grammars \/ parsers to cope with extragrammatical input Carbonell and Hayes 1983, Douglas and Dale 1992, Jensen et al. 1983, Mellish 1989, this is a feature which has to be ` added on ' and tends to affect the statement of the grammar .","label":1,"label_text":"CTR"} +{"text":"For our initial grammar , we choose a grammar that can generate any string , to assure that the grammar can cover the training data .","label":5,"label_text":"OWN"} +{"text":"The class accumulates enough evidence provided by erroneously extracted triples .","label":5,"label_text":"OWN"} +{"text":"To assist in this , low frequency items in the lexicon are grouped together into equivalence classes , such that all words in a given equivalence class have the same tags and lexical probabilities , and whenever one of the words is looked up , then the data common to all of them is used .","label":6,"label_text":"OTH"} +{"text":"As nothing gets predicted from a passive edge ( 4 ) , it does not have a forward index .","label":5,"label_text":"OWN"} +{"text":"This problem is analyzed in de Swart 1991 as an instance of the proportion problem and given a solution from a Generalized Quantifier approach .","label":6,"label_text":"OTH"} +{"text":"We conducted some experiments in order to test the effectiveness of this strategy .","label":5,"label_text":"OWN"} +{"text":"The objective of this algorithm is to parse input string with the least number of errors .","label":6,"label_text":"OTH"} +{"text":"If the similarity-based estimate is relatively high , a bigram would receive a higher estimate than predicted by the uniform discounting method .","label":5,"label_text":"OWN"} +{"text":"Another hypothetical factor was the relative cost of retrieval and communication .","label":5,"label_text":"OWN"} 
+{"text":"Figureshows the relevant parts of the discourse state after interpretation of this utterance .","label":5,"label_text":"OWN"} +{"text":"However , although the practical throughput of parsers with such realistic grammars is important , for example when processing large amounts of text or in interactive applications , there is little published research that compares the performance of different parsing algorithms using wide-coverage unification-based grammars .","label":1,"label_text":"CTR"} +{"text":"The task consists judging which of two verbs v and v ' is more likely to take a given noun n as object , when all occurrences of ( v , n ) in the training set were deliberately deleted .","label":5,"label_text":"OWN"} +{"text":"We defined an artificial class-based model and generated some data according to its distribution .","label":5,"label_text":"OWN"} +{"text":"We found that each source provided some useful information for speech segmentation , but the combination of sources provided substantial information .","label":5,"label_text":"OWN"} +{"text":"Hypothetical reasoning by keeping track of dependencies between items","label":3,"label_text":"BAS"} +{"text":"By the same way as the subordinate clause case is dealt with , the zero subject of the main clauseis known to refer to Hanako , too .","label":6,"label_text":"OTH"} +{"text":"Chris is not a bachelor presupposes that Chris is a male adult ; Chris regrets that Mary came to the party presupposes that Mary came to the party .","label":5,"label_text":"OWN"} +{"text":"Thirdly , LFG grammars impose constraints on zoomin .","label":5,"label_text":"OWN"} +{"text":"Similarly, since.","label":5,"label_text":"OWN"} +{"text":"The number of the noun phrase is then determined by the countability preference of the noun phrase heading it .","label":5,"label_text":"OWN"} +{"text":"Carpenter's ALECarpenter 1993allows the user to define the type hierarchy of a grammar by writing a collection of clauses which together 
denote an inheritance hierarchy , a set of features and a set of appropriateness conditions .","label":6,"label_text":"OTH"} +{"text":"Broader issues raised by this work are noted and discussed .","label":5,"label_text":"OWN"} +{"text":"However , an analysis of the relationship between the kinds of solutions found , the two R tables and the parameter values h , l and d has not been attempted .","label":5,"label_text":"OWN"} +{"text":"After cycle removal , incorporating relevant indexing and the collapsing of redundant magic predicates the magic-compiled grammar from figurelooks as displayed in figure.","label":5,"label_text":"OWN"} +{"text":"A comparison was made of vectors derived by using ordinary co-occurrence statistics from large text corpora and of vectors derived by measuring the inter-word distances in dictionary definitions .","label":2,"label_text":"AIM"} +{"text":"This can lead to nontermination as the tree fragments enumerated in bottom-up evaluation of magic compiled grammars are connectedJohnson forthcoming.","label":1,"label_text":"CTR"} +{"text":"However , the string positions are useful as an indexing of the items so that it can be easily detected whether their combination can contribute to a proof of the goal .","label":5,"label_text":"OWN"} +{"text":"If the head constituent of an NP falls within the scope of a denumerator it is countable .","label":6,"label_text":"OTH"} +{"text":"The `` learned \/ humanities '' subcategory is , as before , problematic : only two of the eighteen items were correctly classified .","label":5,"label_text":"OWN"} +{"text":"In a dynamic grammar , any substring of a sentence can be assigned a type .","label":6,"label_text":"OTH"} +{"text":"More specifically , this problem arises when a complement receives essential restricting information from the head of the construction from which it has been extracted , while , at the same time , it provides essential restricting information for the complements that stayed behind 
.","label":5,"label_text":"OWN"} +{"text":"Neither the observation that trigrams may represent the limit of usefulness for N-gram modeling in ATIS , nor that non-trivial contextual influences exist between occurrences of grammar rules , is very novel or remarkable in its own right .","label":5,"label_text":"OWN"} +{"text":"whereandare the node sets forandrespectively , andis the set of edges for the target graph .","label":5,"label_text":"OWN"} +{"text":"By assigning a probability distribution to the possible choices , decision trees provide a ranking system which not only specifies the order of preference for the possible choices , but also gives a measure of the relative likelihood that each choice is the one which should be selected .","label":6,"label_text":"OTH"} +{"text":"However , if we assume that structures of that kind do not occur , a depth-first interpreter will be sufficient , e.g. the inference rules of the algorithm can be encoded and interpreted directly in Prolog .","label":6,"label_text":"OTH"} +{"text":"whererepresents the discounted estimate for seen bigrams ,the model for probability redistribution among the unseen bigrams , andis a normalization factor .","label":6,"label_text":"OTH"} +{"text":"These 134 tags are then mapped onto the small customised tagsets .","label":5,"label_text":"OWN"} +{"text":"The developed techniques are direction independent in the sense that they can be used for both generation and parsing with HPSG grammars .","label":5,"label_text":"OWN"} +{"text":"Consider the TNCBs in figure.","label":5,"label_text":"OWN"} +{"text":"On the other hand , conventional knowledge or script-based abstract generation systemsLehnert 1980,Fum 1986, owe their success to the limitation of the domain , and cannot be applied to document with varied subjects , such as popular scientific magazine .","label":1,"label_text":"CTR"} +{"text":"An alternative is to allow an unbounded stack to be shared between two ( or more ) daughters but not with 
the mother .","label":5,"label_text":"OWN"} +{"text":"We are not aware of any formalism or computational approach that offers a unified explanation for the cancellability of pragmatic inferences in general , and of no approach that handles cancellations that occur in sequences of utterances .","label":1,"label_text":"CTR"} +{"text":"A comparison was made of co-occurrence vectors from large text corpora and of distance vectors from dictionary definitions .","label":2,"label_text":"AIM"} +{"text":"We run marked up training data through an early version of the network trained on the same data , so the results should be almost all correct .","label":5,"label_text":"OWN"} +{"text":"In this paper , we view the problem of learning case frame patterns as that of learning a multi-dimensional discrete joint distribution , where random variables represent case slots .","label":5,"label_text":"OWN"} +{"text":"To date , grammar checkers and other programs which deal with illformed input usually step directly from spelling considerations to a full-scale sentence parse .","label":6,"label_text":"OTH"} +{"text":"Sentence analysis accomplishes morphological and syntactic analysis for each sentence .","label":5,"label_text":"OWN"} +{"text":"The manner in which the global probability is calculated will be partly dependent upon the information contained in the local probability calculations .","label":5,"label_text":"OWN"} +{"text":"The system is being improved by identifying groups of words that act as single lexical items .","label":5,"label_text":"OWN"} +{"text":"Sagthen suggests a weakening of his condition , with the result that both of the above examples are incorrectly predicted to be acceptable ; he doesn't consider a solution predicting the judgements as stated .","label":1,"label_text":"CTR"} +{"text":"For case, the spelling rules may be applied directly , just as in rule compilation , to a specified surface or lexical character sequence , as if no lexical or morphotactic 
constraints existed .","label":5,"label_text":"OWN"} +{"text":"These symbolic and statistical approaches are beginning to draw together as it becomes clear that one cannot exist entirely without the other : the knowledge of language posited over the years by theoretical linguists has been useful in constraining and guiding statistical approaches , and the corpora now available to linguists have resurrected the desire to account for real language data in a more principled way than had previously been attempted .","label":4,"label_text":"BKG"} +{"text":"For example , in Dynamic Predicate LogicGroenendijk and Stokhof 1991, states are threaded from the antecedent of a conditional into the consequent , and from a restrictor of a quantifier into the body .","label":6,"label_text":"OTH"} +{"text":"The following are some notions useful for the formulation of the presentation operators :","label":5,"label_text":"OWN"} +{"text":"A DCG is a simple example of a family of constraint-based grammar formalisms that are widely used in natural language analysis ( and generation ) .","label":4,"label_text":"BKG"} +{"text":"This account replaces that assumption with a model in which the evidence of the hearer must be considered to establish mutual beliefs .","label":5,"label_text":"OWN"} +{"text":"Penalty scores are imposed on the structure candidates violating the preference rules .","label":5,"label_text":"OWN"} +{"text":"Usually this feature-clash situation creates the problem of which constituent to give preference toLanger 1990.","label":5,"label_text":"OWN"} +{"text":"This work is supported by ARO grant DAAL03 - 89 - 0031 , DARPA grant N00014 - 90-J - 1863 , and ARO grant DAAH04 - 94-G - 0426 .","label":5,"label_text":"OWN"} +{"text":"A sloppy substitution involves substituting a new term index for the old one .","label":5,"label_text":"OWN"} +{"text":"Figureshows an example thesaurus for the 20 most frequently occurred nouns in the data , constructed based on their appearances 
as subject and object of roughly 2000 verbs .","label":5,"label_text":"OWN"} +{"text":"The results are quite widely spread .","label":5,"label_text":"OWN"} +{"text":"Tone Transcription","label":5,"label_text":"OWN"} +{"text":"We inductively define a three place relationwhich holds between models, nodes n and wffs.","label":5,"label_text":"OWN"} +{"text":"The parsers create parse forestsTomita 1987that incorporate subtree sharing ( in which identical sub-analyses are shared between differing superordinate analyses ) and node packing ( where sub-analyses covering the same portion of input whose root categories are in a subsumption relationship are merged into a single node ) .","label":6,"label_text":"OTH"} +{"text":"When seeking an intersentential co-specification ,Hobbsalgorithm searches the parse tree of the previous utterance breadth-first , from left to right .","label":6,"label_text":"OTH"} +{"text":"Our approach to parallelism is perhaps heavy-handed , but in the absence of a clear solutions , possibly more flexible .","label":5,"label_text":"OWN"} +{"text":"The second arises from strictly identifying the pronouns , while sloppily identifying the books .","label":4,"label_text":"BKG"} +{"text":"Proposition 21 .","label":5,"label_text":"OWN"} +{"text":"B may not think that p is relevant ,","label":5,"label_text":"OWN"} +{"text":"F is true of u under I iff [ F is a feature structure,","label":5,"label_text":"OWN"} +{"text":"Third , why don't these utterance correlate with typical stalling behavior such as false starts , pauses , and filled pauses such as uhhh .","label":5,"label_text":"OWN"} +{"text":"That is why , the N-object cannot inherit a case value and also does not know whether it is allowed to occupy the front position in the utterance .","label":5,"label_text":"OWN"} +{"text":"Finally , it is worth noting why it is necessary to use h-lists .","label":5,"label_text":"OWN"} +{"text":"for some effective function, [ for eachand each, [ ifis defined 
thenotherwise` undefined '","label":5,"label_text":"OWN"} +{"text":"In addressing this example ,Lascarides and Asherspecify a special rule ( the Connections When Changing Tense ( CCT ) Law ) that stipulates that a sentence containing the simple past followed by a sentence containing the past perfect can be related only by a subset of the otherwise possible coherence relations .","label":6,"label_text":"OTH"} +{"text":"As for clustering the sites into dialect areas , the familiar bottom-up agglomeration method proves superior to top-down partitioning around medoids .","label":1,"label_text":"CTR"} +{"text":"Thus instead of doing all quantifier scoping at the end of the sentence , each new quantifier is scoped relative to the existing quantifiers ( and operators such as negation , intensional verbs etc ) .","label":5,"label_text":"OWN"} +{"text":"For , the size of the domain of nonterminals and associated stacks ( the analogue of the nonterminals in cfg ) is not bound by the grammar .","label":6,"label_text":"OTH"} +{"text":"These conclusions are based partly on an analysis of the mathematical properties of the clusters themselves , partly on how well they correlate with analyses based on more traditional isogloss techniques , and partly on how well they compare with previously-published descriptions of dialects in a specific language , Irish Gaelic .","label":5,"label_text":"OWN"} +{"text":"For example , the following query","label":5,"label_text":"OWN"} +{"text":"A second problem is that many isoglosses do not neatly bisect the language area .","label":1,"label_text":"CTR"} +{"text":"They do not take GEN ` the ' , because then the noun phrase would normally be interpreted as having definite reference .","label":5,"label_text":"OWN"} +{"text":"To take into consideration the statistical significance of the alternatives involved , before doing a generalization step , climbing upwards ,","label":5,"label_text":"OWN"} +{"text":"Definition 22 
.","label":5,"label_text":"OWN"} +{"text":"The associations can be obtained statistically according to the network 's experiences .","label":6,"label_text":"OTH"} +{"text":"The concurrent calculus is modeled with Chemical Abstract Machine .","label":3,"label_text":"BAS"} +{"text":"some IRU strategies are only beneficial when inferential complexity is higher than in the Standard TaskRambow and Walker 1994,Walker 1994a.","label":6,"label_text":"OTH"} +{"text":"An example would be that `` and '' or `` but '' signal to the listener that a new topic and set of referents is being introduced whereas `` anyway '' and `` in any case '' indicate a return to a previous topic and referent set .","label":6,"label_text":"OTH"} +{"text":"corresponding to the local tree rooted at h with dependent nodes.","label":5,"label_text":"OWN"} +{"text":"First order logic was assumed as the semantic representation language because it comes with well understood , if not very practical , inferential machinery for constraint solving .","label":5,"label_text":"OWN"} +{"text":"The original examples , such as the following ,","label":4,"label_text":"BKG"} +{"text":"The operatorcaters for obligatory rules .","label":6,"label_text":"OTH"} +{"text":"Local weight could be obtained using.","label":5,"label_text":"OWN"} +{"text":"how the languages are represented in human mind ;","label":5,"label_text":"OWN"} +{"text":"Both rules produce a magic fact with which a subject np can be built .","label":5,"label_text":"OWN"} +{"text":"The criterion that remains to be satisfied is that of width of coverage : can the formalism cope with the many `` peripheral '' structures found in real written and spoken texts ?","label":5,"label_text":"OWN"} +{"text":"The feature value `` compulsory '' indicates that if the applicability condition is satisfied , and the style of the operator conforms to the global style the text planner is committed to , this operator should be chosen .","label":5,"label_text":"OWN"} 
+{"text":"The crucial point is the provision of some way of storing the extracted part of the interpretation and making it available when required .","label":5,"label_text":"OWN"} +{"text":"V_AND_NP_LIST : Contains content words found in this DCU , and is used to compare the content words of the current DCU with those in previous threads , in order to rate the semantic `` closeness '' of the DCU to each thread .","label":5,"label_text":"OWN"} +{"text":"Third , unlike conventional knowledge or script-based abstract generation systemsLehnert 1980,Fum 1986, the rhetorical structure extraction does not need prepared knowledge or scripts related to the original text , and can be used for texts of any domain , so long as they contain enough rhetorical expressions to be expository writings .","label":1,"label_text":"CTR"} +{"text":"In this paper , we propose a method of learning dependencies between case frame slots .","label":2,"label_text":"AIM"} +{"text":"High-level Discourse Goals","label":5,"label_text":"OWN"} +{"text":"Our system is data driven as far as possible : the rules are invoked if they are needed to make the problem computationally tractable .","label":5,"label_text":"OWN"} +{"text":"We will see below that the use of relative entropy for similarity measure makesvanish at the maximum as well , so the log likelihood can be maximized by minimizing the average distortion with respect to the class centroids while class membership is kept fixed","label":5,"label_text":"OWN"} +{"text":"From a given solution ( set of triples )we can compute in polynomial time a mapping k that sends the index of an element to the index of its solution triple , i.e. 
,.","label":5,"label_text":"OWN"} +{"text":"The third utterance in this example has two interpretations which are both consistent with the centering rules and constraints .","label":5,"label_text":"OWN"} +{"text":"'' While nominal centering assumes there is one object that the current discourse is `` about , '' temporal centering assumes that there is one thread that the discourse is currently following , and that , in addition to tense and aspect constraints , there is a preference for a new utterance to continue a thread which has a parallel tense or which is semantically related to it and a preference to continue the current thread rather than switching to another thread .","label":6,"label_text":"OTH"} +{"text":"But agents can decide whether as well as how to revise their beliefsGalliers 1991.","label":5,"label_text":"OWN"} +{"text":"Finally , the PP attachment procedure has to be called again for the in and on PPs .","label":5,"label_text":"OWN"} +{"text":"Consider :","label":5,"label_text":"OWN"} +{"text":"We also smoothso as not to have zeros in positive or negative outcome probabilities :.","label":5,"label_text":"OWN"} +{"text":"The semantics of a motion complex is not the simple addition of the semantics of its constituents .","label":5,"label_text":"OWN"} +{"text":"They include tests for the existence of particular information , tests for the structure under creation and tests for the state of processing .","label":5,"label_text":"OWN"} +{"text":"If , for some reason , the system dropped the intention without satisfying it and the obligation were still current , the system would place them back on the stack .","label":6,"label_text":"OTH"} +{"text":"Note that this version of the Non-Local-Feature principle corresponds to the hypothetical reasoning mechanism which is provided by theLambekcategorial grammarsLambek 1958,Koenig 1994a.","label":5,"label_text":"OWN"} +{"text":"One reason is that it does not use any structural information of a language 
.","label":6,"label_text":"OTH"} +{"text":"As generating articles and number is only important when the rest of the sentence has been correctly generated , there has not been a lot of research devoted to it .","label":4,"label_text":"BKG"} +{"text":"Ruleis more general than rule.","label":5,"label_text":"OWN"} +{"text":"Recall that weights are initialised to 1.0 .","label":5,"label_text":"OWN"} +{"text":"An analysis for this sentence is available in the CCG framework by the addition of the xsubstitute combinatorSteedman, as defined inSteedman 1987.","label":6,"label_text":"OTH"} +{"text":"Then the \/\/ - valuebecomes [] .","label":5,"label_text":"OWN"} +{"text":"In general for any transition XY , where X is a category and Y a list of categories ( possibly empty ) , there will be a transition introducing coordination :.","label":5,"label_text":"OWN"} +{"text":"The control segments as defined would treat both of these cases as composed of 3 different segments .","label":5,"label_text":"OWN"} +{"text":"The syntactic VP could be copied down with its corresponding semantics , from which the semantics for the complete sentence can be derived .","label":5,"label_text":"OWN"} +{"text":"In the case of this example the seed looks as follows :","label":6,"label_text":"OTH"} +{"text":"I would argue that success at that task will require combining knowledge of the kind that WordNet provides , primarily about relatedness of meaning , with knowledge of the kind best provided by corpora , primarily about usage in context .","label":5,"label_text":"OWN"} +{"text":"A similar phenomenon appears to be taking place in the next set of sentences :","label":4,"label_text":"BKG"} +{"text":"If we want to construct a general theory of discourse than we want to know about the whole range of cues serving this function .","label":1,"label_text":"CTR"} +{"text":"For each, eachand each,is defined , andiff, and for some,.","label":5,"label_text":"OWN"} +{"text":"In practice , it makes sense to 
combine both types of thesauri .","label":5,"label_text":"OWN"} +{"text":"Using the chain rule on mutual informationCover and Thomas 1991, we can mathematically relate the different versions of Assoc ,","label":3,"label_text":"BAS"} +{"text":"In designing an agent to control the behavior of the dialogue manager , we choose a reactive approach in which the system will not deliberate and add new intentions until after it has performed the actions which are already intended .","label":5,"label_text":"OWN"} +{"text":"The acquisition of case frame patterns normally involves the following three subproblems :","label":4,"label_text":"BKG"} +{"text":"We evaluate the generality of this analysis by applying the control rules to 4 sets of dialogues , including both advisory dialogues ( ADs ) and task-oriented dialogues ( TODs ) .","label":5,"label_text":"OWN"} +{"text":"This is compatible with the centering framework since it is underspecified as to whether one should always choose to establish a discourse center with a co-specifier from a previous utterance .","label":6,"label_text":"OTH"} +{"text":"It is part of a discourse grammar implemented inCarpenter's ALE formalism .","label":3,"label_text":"BAS"} +{"text":"Other factors of interest are whether the dialogues are human-to-human or human-to-computer , as well as the modality of communication , e.g. spoken or typed , since some researchers have indicated that dialogues , and particularly uses of reference within them , vary along these dimensionsCohen 1984,Henisz Thompson 1980,Guindon et al. 
1986,Dahlbach and Johnson 1989,Whittaker and Stenton 1989.","label":5,"label_text":"OWN"} +{"text":"Closely related is the idea of word identity , where the words are not counted the same unless all of their morphemes agree .","label":6,"label_text":"OTH"} +{"text":"The dataflow analysis is used to determine the relative efficiency of a particular evaluation order of the right-hand side categories in a phrase structure rule by computing the maximal degree of nondeterminacy introduced by the evaluation of each of these categories .","label":5,"label_text":"OWN"} +{"text":"In this section we evaluate the performance of the methodology implemented :","label":0,"label_text":"TXT"} +{"text":"For unknown words , smaller tagsets give higher accuracy .","label":5,"label_text":"OWN"} +{"text":"This behaviour cannot be captured whether we adopt a bottom-up or a top-down search for tree-lowering .","label":5,"label_text":"OWN"} +{"text":"The DRS in Figuredescribes the complex state, that after each event of John 's coming home , there is a sequence of subsequent events according to his activities .","label":5,"label_text":"OWN"} +{"text":"In our theory , this affected person plays a key role for semantics of complex sentence .","label":5,"label_text":"OWN"} +{"text":"We therefore use the sum of the number of fragments in the analysis as an additional feature .","label":5,"label_text":"OWN"} +{"text":"This suggests that , when considering the complexity of parsers , the issue of parse table size is of minor importance for realistic NL grammars ( as long as an implementation represents the table compactly ) , and that improvements to complexity results with respect to grammar size , although interesting from a theoretical standpoint , may have little practical relevance for the processing of natural language .","label":5,"label_text":"OWN"} +{"text":"In this paper , we use ` case slots ' to mean surface case slots , and we uniformly treat obligatory cases and optional cases 
.","label":5,"label_text":"OWN"} +{"text":"ABDICATION :","label":6,"label_text":"OTH"} +{"text":"However , the generation algorithm turns much simpler and hence more efficient .","label":5,"label_text":"OWN"} +{"text":"Moreover , the interchanging of arguments in recursive procedures as proposed byStrzalkowskifails to guarantee that input and output grammars are semantically equivalent .","label":1,"label_text":"CTR"} +{"text":"Furthermore , the programs are designed not to get caught in local optima , which is a problem since interesting alternative transcriptions may actually be local optima .","label":5,"label_text":"OWN"} +{"text":"These desiderata can be met by making use of a propositional language augmented with","label":5,"label_text":"OWN"} +{"text":"An action is forbidden if it is not permissible .","label":5,"label_text":"OWN"} +{"text":"The parametercontrols the relative contribution of words in different distances from: as the value ofincreases , the nearest words toget relatively more weight .","label":5,"label_text":"OWN"} +{"text":"Summarising the patterns :","label":5,"label_text":"OWN"} +{"text":"The implicit pronoun has been strictly identified with the pronoun in the antecedent to pick out the same referent , John .","label":4,"label_text":"BKG"} +{"text":"The similarity between words is a mapping:, where L is a set of words ( or lexicon ) .","label":5,"label_text":"OWN"} +{"text":"There will be a set of edges","label":5,"label_text":"OWN"} +{"text":"Undetermined TNCBs are commutative , e.g. 
they do not distinguish between the structures shown in Figure.","label":5,"label_text":"OWN"} +{"text":"In particular , the probability does not depend on the order in which the rules are applied .","label":6,"label_text":"OTH"} +{"text":"However , even if we fix both syntactic category and lexical meaning , we still get some weird coordinations .","label":4,"label_text":"BKG"} +{"text":"the information upon which the participants are basing their plans , and","label":5,"label_text":"OWN"} +{"text":"As well as this quantitative approach , we will consider a constraint \/ logic based approach and try to distinguish characteristics that we wish to preserve from those that are best replaced by statistical models .","label":5,"label_text":"OWN"} +{"text":"Once the stack decoder has found a complete parse of reasonable probability () , it switches to a breadth-first mode to pursue all of the partial parses which have not been explored by the stack decoder .","label":5,"label_text":"OWN"} +{"text":"Resnik 1993also uses a local normalization technique but he normalizes by the total number of classes in the hierarchy .","label":6,"label_text":"OTH"} +{"text":"The parallel which makes it possible to apply the PCFG training scheme almost unchanged is that the sub-types of a given super-type partition the feature structures of that type in just the same way that the different rules which expand a given non-terminal N of the PCFG partition the space of trees whose topmost node is N .","label":5,"label_text":"OWN"} +{"text":"Similarly , some French verbs whose infinitives end in - eler take a grave accent on the first e in the third person singular future ( modeler , `` model '' , becomes modlera ) , while others double the l instead ( e.g. 
appeler , `` call '' , becomes appellera ) .","label":5,"label_text":"OWN"} +{"text":"Consider the simple grammar in figureand its training against the corpus in figure.","label":6,"label_text":"OTH"} +{"text":"A subsumption ordering over QLFS ,, is employed in the evaluation rules , in effect to propose possible instantiations for meta-variables ( the rule fragment only allows for scope meta-variables , butCooper et al. 1994adescribes the more general case where other kinds of meta-variable are permitted ) .","label":5,"label_text":"OWN"} +{"text":"An intra-paragraph structure is a structure whose representation units are sentences , and an inter-paragraph structure is a structure whose representation units are paragraphs .","label":5,"label_text":"OWN"} +{"text":"describes a complex tense clause anddescribes a complex tense event , or","label":5,"label_text":"OWN"} +{"text":"The Head Feature Principle just unifies two variables , so that it can be executed at compile time and need not be called as a goal at runtime .","label":5,"label_text":"OWN"} +{"text":"Since this information is not referenced until later , the COMP feature is used to limit the number of superfluous proper-branches generated by the parser .","label":5,"label_text":"OWN"} +{"text":"The following function determines the set of cutnodes N that either exceed the entropy threshold , or are induced by structural equivalence :","label":5,"label_text":"OWN"} +{"text":"For example :","label":6,"label_text":"OTH"} +{"text":"Let us see why this is so .","label":5,"label_text":"OWN"} +{"text":"It consists of two parts , the constraint-based portion and the preference-based portion :","label":5,"label_text":"OWN"} +{"text":"Joe 's statement is based on his prior beliefs .","label":5,"label_text":"OWN"} +{"text":"Whichever is found is then chosen as the more closely bracketed pair .","label":6,"label_text":"OTH"} +{"text":"The discussion so far should have given the reader some idea of how to specify LFG 
grammars using.","label":2,"label_text":"AIM"} +{"text":"For instance , in our test corpus , we find expressions like en 225 pages , leur tour , ces postes and pour les postes de responsabilit for which the contextual analysis does not help to disambiguate the gender of the head noun .","label":5,"label_text":"OWN"} +{"text":"For ambiguous words , the pattern and accuracy were similar to first experiment .","label":5,"label_text":"OWN"} +{"text":"There are various technical difficulties withGoodall's accountvan Oirsouw 1987,Moltmann 1992.","label":1,"label_text":"CTR"} +{"text":"A string of words is a sentence if it has the type , ccwhere cand care appropriate initial and final states for a parse .","label":6,"label_text":"OTH"} +{"text":"Feature constraints , and cases where the rules will not apply if those constraints are broken , are shown .","label":5,"label_text":"OWN"} +{"text":"The sentences in Wheels are short and simple with long sequences consisting of reported conversation , so it is similar to a conversational text .","label":5,"label_text":"OWN"} +{"text":"Examples are word doubling and omission of a common function word .","label":5,"label_text":"OWN"} +{"text":"In this paper we examine the usefulness of these distance vectors as semantic representations by comparing them with co-occurrence vectors .","label":2,"label_text":"AIM"} +{"text":"Vowel and Diacritic Shifts","label":4,"label_text":"BKG"} +{"text":"This instantiates the inference rule for understanding as follows :","label":5,"label_text":"OWN"} +{"text":"Members of the surface and lexical strings may be characters or classes of single characters .","label":5,"label_text":"OWN"} +{"text":"The works ofPruest 1992andAsher 1993provide analyses of VP-ellipsis in the context of an account of discourse structure and coherence .","label":6,"label_text":"OTH"} +{"text":"In our analysis we argued for hierarchical organization of the control segments on the basis of specific examples of interruptions 
.","label":5,"label_text":"OWN"} +{"text":"The description ofSussna's algorithm for disambiguating noun groupings like this one is similar to the one proposed here , in a number of ways : relatedness is characterized in terms of a semantic network ( specifically WordNet ) ; the focus is on nouns only ; and evaluations of semantic similarity ( or , inSussna's case , semantic distance ) are the basis for sense selection .","label":3,"label_text":"BAS"} +{"text":"Let M be the result of substituting all instances of N in L with R .","label":5,"label_text":"OWN"} +{"text":"In such cases , we observe that the test instance itself provides the information that the eventcan occur and we recalculate the ratio usingfor all possible categorieswhere k is any non-zero constant .","label":5,"label_text":"OWN"} +{"text":"This treatment presupposes the choice of a ` dummy ' verb , which at least subcategorizes a subject and has active voice .","label":5,"label_text":"OWN"} +{"text":"Therefore , if a ( statistical ) tagger is not able to use the relevant context , it may produce some extra errors by using the gender .","label":5,"label_text":"OWN"} +{"text":"This allows the evaluation of the filler right after the evaluation of the auxiliary verb , but prior to the subject .","label":5,"label_text":"OWN"} +{"text":"This is the final incarnation of the formalism , being the State-Transition Grammar of the title .","label":5,"label_text":"OWN"} +{"text":"We won't distinguish the SRs imposed by verbs on arguments and adjuncts .","label":6,"label_text":"OTH"} +{"text":"Consider the goal of adding a discourse component to a system , or evaluating and improving one that is already in place .","label":4,"label_text":"BKG"} +{"text":"For instance , dort ( present ; sleeps ) and dormira ( future ; will sleep ) have the same tag VERB-SG-P 3 , because they are both singular , third-person forms and they can both be the main verb of a clause .","label":5,"label_text":"OWN"} +{"text":"The 
crucial data structure that it employs is the TNCB .","label":5,"label_text":"OWN"} +{"text":"The factor of 1000 is intended to scale the energy distribution to typical values of the evaluation function .","label":6,"label_text":"OTH"} +{"text":"Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text , as is evidenced by their poor performance on domains like the Wall Street Journal , and by the movement away from parsing-based approaches to text-processing in general .","label":1,"label_text":"CTR"} +{"text":"Generally , a decision during generation influences other decisions all over the system .","label":5,"label_text":"OWN"} +{"text":"Otherwise traverse the surface parse trees of previous sentences in the text in reverse chronological order until an acceptable antecedent is found ; each tree is traversed in a left-to-right , breadth-first manner , and when an NP node is encountered , it is proposed as the antecedent .","label":6,"label_text":"OTH"} +{"text":"D2 Lexical probabilities are proportional to the overall tag frequencies , and are hence independent of the actual occurrence of the word in the training corpus .","label":5,"label_text":"OWN"} +{"text":"The relative efficiency of this evaluation leads our compiler to choose","label":5,"label_text":"OWN"} +{"text":"A German language model for the Xerox HMM tagger is presented .","label":2,"label_text":"AIM"} +{"text":"For our analysis of gapping , we followSag 1976in hypothesizing that a post-surface-structure level of syntactic representation is used as the basis for interpretation .","label":3,"label_text":"BAS"} +{"text":"The hand tagging of these corpora is quite different .","label":5,"label_text":"OWN"} +{"text":"We extend the general algorithm for least-errors recognition to adopt it as the recovery mechanism in our robust parser .","label":3,"label_text":"BAS"} +{"text":"We found that each dialogue could be divided into two 
parts separated by a topic shift which we labelled the central shift .","label":5,"label_text":"OWN"} +{"text":"We presented a technique for fully unsupervised statistical acquisition of rules which guess possible parts-of-speech for words unknown to the lexicon .","label":2,"label_text":"AIM"} +{"text":"The hearer needs to be confident that the description will be adequate as a means of identifying the referent , but because of the inevitable differences in beliefs about the world , he might not be .","label":4,"label_text":"BKG"} +{"text":"Although top-down and bottom-up presentation activities are of a conceptually different nature , the corresponding communication knowledge is uniformly encoded as presentation operators in a planning framework , similar to the plan operators in other generation systemsHovy 1988,Moore 1989,Dale 1992,Reithinger 1991.","label":3,"label_text":"BAS"} +{"text":"This first involves deleting TNCB 4 ( noting it ) , and raising node 3 to replace node 2 .","label":5,"label_text":"OWN"} +{"text":"Finally , it would be interesting to only use the information related to the selectional behavior of, i.e. 
comparing the conditional probabilities of c andgivenwith the corresponding marginals .","label":5,"label_text":"OWN"} +{"text":"The accuracy of the taggers on the set of 347 unknown words when they were made known to the lexicon was detected at 98.5 % for both taggers .","label":5,"label_text":"OWN"} +{"text":"However , we do not impose this restriction because it still leaves open the possibility of generating trees in which every branch has the same length , thus violating the condition that trees have at most a bounded number of unbounded , dependent branches .","label":5,"label_text":"OWN"} +{"text":"Most psycholinguistic experimentation has been concerned with which scope preferences are made , rather than the point at which the preferences are establishedKurtzman and MacDonald 1993.","label":1,"label_text":"CTR"} +{"text":"Similarly , verb neologisms belong to the regular conjugation paradigm characterised by the infinitive ending er , e.g. dballaduriser .","label":5,"label_text":"OWN"} +{"text":"Assume for example that a physical system can be in any of N states , and that it will be in statewith probability.","label":4,"label_text":"BKG"} +{"text":"However , clusters do not improve trigram modeling at all .","label":5,"label_text":"OWN"} +{"text":"Verb and noun are the lemmas of the inflected forms appearing in text .","label":5,"label_text":"OWN"} +{"text":"If a binary branching formalism is employed , or indeed any formalism where the arguments of an item and the item itself are not necessarily all sisters , the problem of when to access the probability of a theta application is presented .","label":5,"label_text":"OWN"} +{"text":"The actual formalism used was much fuller than the rather schematic one given above , including many additional features such as case , tense , person and number .","label":5,"label_text":"OWN"} +{"text":"Every non-head nonterminal leaf of a local tree must come with a ( possibly empty ) multiset of syntax-semantics pairs as 
the value of its to _ bind : slash-feature ( feature abbreviated as \/ ) , cf. example.","label":5,"label_text":"OWN"} +{"text":"Mercer 1987formalizes presuppositions in a logical framework that handles defaultsReiter 1980, but this approach is not tractable and it treats natural disjunction as an exclusive-or and implication as logical equivalence .","label":1,"label_text":"CTR"} +{"text":"Similarly , threading from the antecedents of conditionals into the consequent fails for examples such as :","label":6,"label_text":"OTH"} +{"text":"Move 3 : Create a rule of the form","label":5,"label_text":"OWN"} +{"text":"InMilward 1992, the dynamics specifies a word-by-word incremental parser for a lexicalised version of dependency grammar .","label":5,"label_text":"OWN"} +{"text":"When the situation requires the negotiation of a collaborative plan , these theories must account for the interacting beliefs and intentions of multiple participants .","label":4,"label_text":"BKG"} +{"text":"But in that case , it appears that checking the applicability of a production at some point in a derivation must entail the comparison of structures of unbounded size .","label":5,"label_text":"OWN"} +{"text":"Most parsers which work left to right along an input string can be described in terms of state transitions i.e. by rules which say how the current parsing state ( e.g. 
a stack of categories , or a chart ) can be transformed by the next word into a new state .","label":5,"label_text":"OWN"} +{"text":"is a conjunction of two pairs of noun phrases .","label":4,"label_text":"BKG"} +{"text":"This is the amount of available energy for transitions to higher energy states .","label":6,"label_text":"OTH"} +{"text":"These recognition hypotheses are passed to a parser which applies a logic-based grammar and lexicon to produce a set of logical forms , specifically formulas in first order logic corresponding to possible interpretations of the utterance .","label":5,"label_text":"OWN"} +{"text":"On the other hand , if a suggestion was made , she could instead attempt to expand the plan by affirming or denying the attribute suggested .","label":5,"label_text":"OWN"} +{"text":"This is done by assigning an entropy value to each node in the parse trees and cutting in the nodes with sufficiently high entropy values .","label":5,"label_text":"OWN"} +{"text":"Therefore , such forms are anaphoric in the semantics , but do not leave behind an empty constituent in the syntax .","label":5,"label_text":"OWN"} +{"text":"Semantic relation :","label":4,"label_text":"BKG"} +{"text":"This is often not the case , especially for preposition + noun sequences and for plural forms , as plural determiners themselves are often ambiguous with respect to gender .","label":5,"label_text":"OWN"} +{"text":"says that I did steal something which belongs to you ,says that I stole somebody 's car , but not yours , andsays that I did do something to your car ( I probably borrowed it , though that is not entailed by) .","label":4,"label_text":"BKG"} +{"text":"We will show elsewhere that the theoretical analysis outlined here applies to that more general problem , but for now we will only address the more specific problem in which the objects are nouns and the contexts are verbs that take the nouns as direct objects .","label":2,"label_text":"AIM"} +{"text":"We address this 
topic in describing a novel approach to HPSGPollard and Sag 1994based language processing that uses an off-line compiler to automatically prime a declarative grammar for generation or parsing , and hands the primed grammar to an advanced Earley processor .","label":2,"label_text":"AIM"} +{"text":"The three readings of book are illustrated below , listing substitutions to be applied to the antecedent and cashing out the results of their application , though omitting scope .","label":5,"label_text":"OWN"} +{"text":"The model","label":5,"label_text":"OWN"} +{"text":"In the case of our example , this would be the steps from edge 2 to edge 3 and edge 3 to edge 4 .","label":5,"label_text":"OWN"} +{"text":"Evaluation of the resulting DESIGN-HOUSE plan is parametrized by","label":5,"label_text":"OWN"} +{"text":"However , in our work we do not wish to limit the size of the grammars considered .","label":1,"label_text":"CTR"} +{"text":"The grammar is divided into modules which filter out ungrammatical structures at the various levels of representation ; these levels are related by general transformations .","label":6,"label_text":"OTH"} +{"text":"This is clearly undesirable .","label":5,"label_text":"OWN"} +{"text":"If our input ` sentence ' now is the definition of trans\/3 as given above , we obtain the following parse forest grammar ( where the start symbol is","label":5,"label_text":"OWN"} +{"text":"The accuracy on the test set for all these experiments is shown in figure.","label":5,"label_text":"OWN"} +{"text":"Now consider individual transitions .","label":5,"label_text":"OWN"} +{"text":"Assert P.","label":6,"label_text":"OTH"} +{"text":"As a result of the structure-sharing between the left-hand side of the rule and the auxiliary verb category , the - value of the auxiliary verb can be treated as bound , as well .","label":5,"label_text":"OWN"} +{"text":"The research reported is in a similar vein to that of , for example ,Moore and Dowding 1991,Samuelsson and Rayner 
1991, andMaxwell and Kaplan 1993, in that it relies on empirical results for the study and optimisation of parsing algorithms rather than on traditional techniques of complexity analysis .","label":3,"label_text":"BAS"} +{"text":"This skeleton is obtained by removing the constraints from each of the grammar rules .","label":5,"label_text":"OWN"} +{"text":"He goes on to say , `` just how far information in the sound structure of the input can bootstrap the acquisition of other levels [ of linguistic organization ] remains to be determined . ''","label":6,"label_text":"OTH"} +{"text":"For example , English inflectional morphology is relatively simple ; dimensionsandare fairly small , so if, the lexicon , is known in advance and is of manageable size , then the entire task of morphological analysis can be carried out at compile time , producing a list of analysed word forms which need only be looked up at run time , or a network which can be traversed very simply .","label":4,"label_text":"BKG"} +{"text":"Next , from these sets of guessing rules we need to cut out infrequent rules which might bias the further learning process .","label":5,"label_text":"OWN"} +{"text":"The prototype accessible via the Internet has been trained on sentences from the technical manuals , slightly augmented .","label":5,"label_text":"OWN"} +{"text":"Given assignments ofat all levels of abstraction , one obvious method of semantic annotation is to assign the highest-level concept for whichis at least as large as the sense-specific value of.","label":5,"label_text":"OWN"} +{"text":"whereis the length of the description of the grammar in bits .","label":6,"label_text":"OTH"} +{"text":"Therefore , once disambiguated the verb senses it would be possible to split the set of SRs acquired .","label":5,"label_text":"OWN"} +{"text":"We expand the ambiguity to 15 possible structures .","label":5,"label_text":"OWN"} +{"text":"Since guessing rules are meant to capture general language regularities the 
lexicon should be as general as possible ( list all possible POS s for a word ) and as large as possible .","label":5,"label_text":"OWN"} +{"text":"Japanese nouns have no equivalent to the English singular and plural forms and verbs do not inflect to agree with the number of the subject Kuno 1973.","label":4,"label_text":"BKG"} +{"text":"Prompts - These were utterances which did not express propositional content .","label":5,"label_text":"OWN"} +{"text":"Using the WordNet hierarchy as a source of backing-off knowledge , in such a way that if n-grams composed by c aren't enough to decide the best sense ( are equal to zero ) , the tri-grams of ancestor classes could be used instead .","label":5,"label_text":"OWN"} +{"text":"It provides a learnable lexical selection sub-system for a connectionist transfer project in machine translation .","label":5,"label_text":"OWN"} +{"text":"Downdrift is the automatic lowering of the second of two H tones when an L intervenes , so HLH is realised asrather than as, while downstep is the lowering of the second of two tones when an intervening L is lost , so HH is realised asHyman and Schuh 1974.","label":4,"label_text":"BKG"} +{"text":"In particular , note that the Fdecay seems to be to a non-zero asymptote , and that H and L appear to have different asymptotes which we symbolise as h and l respectively .","label":5,"label_text":"OWN"} +{"text":"It does not make sense to fix absolute values for the retrieval , inference and communication cost parameters in relation to human processing .","label":5,"label_text":"OWN"} +{"text":"A simple calculation shows that using their own preprocessing heuristics to guess a bracketing provides a higher accuracy on their test set than their statistical model does .","label":1,"label_text":"CTR"} +{"text":"since any \/ connective in U needs to be introduced via.","label":5,"label_text":"OWN"} +{"text":"What are the linguistic characteristics of summaries ?","label":5,"label_text":"OWN"} 
+{"text":"While Hindle and Rooth 1993 use a partial parser to acquire training data , such machinery appears unnecessary for noun compounds .","label":1,"label_text":"CTR"} +{"text":"The similarity model seems to be able to model better regularities such as semantic parallelism in lists and avoiding a past tense form after `` to '' .","label":5,"label_text":"OWN"} +{"text":"In fact , it appears that gapping is felicitous in those constructions where VP-ellipsis requires a syntactic antecedent , whereas gapping is infelicitous in cases where VP-ellipsis requires only a suitable semantic antecedent .","label":4,"label_text":"BKG"} +{"text":"Obligations ( or at least beliefs that the agent has obligations ) will thus form an important part of the reasoning process of a deliberative agent , e.g. , the architecture proposed by Bratman et al. 1988.","label":5,"label_text":"OWN"} +{"text":"We have given an account of resource sharing in the syntax \/ semantics interface of LFG .","label":2,"label_text":"AIM"} +{"text":"whileis not empty .","label":5,"label_text":"OWN"} +{"text":"So ifis the root of a tree C , we have","label":5,"label_text":"OWN"} +{"text":"For such cases , the correct segmentation can be uniquely identified by applying the structure analysis for 7 , 19 , and 17 cases , and the correct structure can be uniquely identified for 7 , 10 , and 8 cases for all collections of test data by using.","label":5,"label_text":"OWN"} +{"text":"Thus , to obtain the LF for John and Bill , the following query would be made :","label":5,"label_text":"OWN"} +{"text":"adjective vs. 
verb ( appliqué , devenu , fabriqué ) ,","label":5,"label_text":"OWN"} +{"text":"Note that we use garu as a value of the relation feature meant by ` rel ' .","label":5,"label_text":"OWN"} +{"text":"The resulting specialized grammar was compiled into LR parsing tables , and a special LR parser exploited their special properties Samuelsson 1994b.","label":6,"label_text":"OTH"} +{"text":"This indexing is used if only adjacent constituents can be combined , but the order of combination is not prescribed ( e.g. non-directional basic categorial grammars ) .","label":5,"label_text":"OWN"} +{"text":"In this section I present the problem of relating sequences of Fvalues to tone transcriptions .","label":0,"label_text":"TXT"} +{"text":"A decision tree is a decision-making device which assigns a probability to each of the possible choices based on the context of the decision :, where f is an element of the future vocabulary ( the set of choices ) and h is a history ( the context of the decision ) .","label":6,"label_text":"OTH"} +{"text":"More specifically , 'magic generation ' falls prey to non-termination in the face of head recursion , i.e. 
, the generation analog of left recursion in parsing .","label":1,"label_text":"CTR"} +{"text":"First , we define a noun partition over a given set of nouns and a verb partition over a given set of verbs.","label":5,"label_text":"OWN"} +{"text":"captures the negotiation process in a recursive Propose-Evaluate-Modify cycle of actions , thus enabling the system to handle embedded negotiation subdialogues .","label":5,"label_text":"OWN"} +{"text":"It seems likely that what is going on is that the model is converging towards something of similar `` quality '' in each case , but when the pattern is classical , the convergence starts from a lower quality model and improves , and in the other cases , it starts from a higher quality one and deteriorates .","label":5,"label_text":"OWN"} +{"text":"Sectionshows applications of the similarity measure -- computing similarity between texts , and measuring coherence of a text .","label":0,"label_text":"TXT"} +{"text":"As the subject of the copula that is countable its complement is judged to be denumerated by the proposed method .","label":5,"label_text":"OWN"} +{"text":"Lemma 13 .","label":5,"label_text":"OWN"} +{"text":"Completeness in this context means : the parse forest grammar contains all possible parses .","label":5,"label_text":"OWN"} +{"text":"where d is the distance between two words and q ( d ) is the probability that two words at distance d have a modification relation .","label":5,"label_text":"OWN"} +{"text":"The performance of both implementations is evaluated and compared on a range of artificial and real data .","label":5,"label_text":"OWN"} +{"text":"Generally , the guesser does not recognise words belonging to closed classes ( conjunctions , prepositions , etc. 
) under the assumption that closed classes are fully described in the basic lexicon .","label":5,"label_text":"OWN"} +{"text":"A promising way of generating contours from tone sequences is to specify one or more pitch targets per tone and then to interpolate between the targets ; the task then becomes one of providing a suitable sequence of targets Pierrehumbert and Beckman 1988.","label":4,"label_text":"BKG"} +{"text":"To the above , one adds language-independent issues in spell checking such as the four Damerau transformations : omission , insertion , transposition and substitution Damerau 1964.","label":6,"label_text":"OTH"} +{"text":"lhip_phrase(+C,+S,-B,-E,-Cov)","label":5,"label_text":"OWN"} +{"text":"Other mental actions in the intermediate plans add up the confidence values of the attributes , and a final constraint makes sure that the sum exceeds the agent 's confidence threshold .","label":5,"label_text":"OWN"} +{"text":"However , there is a corresponding problem of far greater non-determinism , with even unambiguous words allowing many possible transitions .","label":1,"label_text":"CTR"} +{"text":"This is illustrated in, adapting Hyman's earlier notation Hyman 1979.","label":5,"label_text":"OWN"} +{"text":"This mechanism is used to explain how tense and temporal adverbials can combine to temporally locate the occurrence , without running into problems of relative scope Hinrichs 1988.","label":6,"label_text":"OTH"} +{"text":"Both of the taggers come with data and word-guessing components pre-trained on the Brown Corpus .","label":6,"label_text":"OTH"} +{"text":"The realization simply implements the previous ideas .","label":5,"label_text":"OWN"} +{"text":"Or choose an element of the \/\/ - valueof the current head.","label":5,"label_text":"OWN"} +{"text":"To gain an intuitive understanding of our model , consider the following speech sample ( transcription is in IPA ) :","label":4,"label_text":"BKG"} +{"text":"The main problem is that the preposition de , 
comparable to English of , is the most common preposition and also has a specific distribution .","label":5,"label_text":"OWN"} +{"text":"It is important to notice that the types of disambiguation carried out by the tagger for German are significantly different from the disambiguation work for English and French .","label":5,"label_text":"OWN"} +{"text":"This can be done , and the method which we adopt has the merit of simplicity .","label":5,"label_text":"OWN"} +{"text":"We have developed a set of triggers for each move in our move set , and only consider a specific move if it is triggered in the sentence currently being parsed in the incremental processing .","label":5,"label_text":"OWN"} +{"text":"Since the temporal connective in this sentence is before , the relation between these two markers is one of precedence .","label":5,"label_text":"OWN"} +{"text":"On the technical manuals the constraints of the grammatic framework put up to 6 % of declarative sentences outside our system , most commonly because the pre-subject is too long .","label":5,"label_text":"OWN"} +{"text":"Create a new predicate magic_p for each predicate p in P .","label":6,"label_text":"OTH"} +{"text":"We could smooth the estimated probabilities using an existing smoothing technique Dagan et al. 
1992, Gale and Church 1990, then calculate some similarity measure using the smoothed probabilities , and then cluster words according to it .","label":6,"label_text":"OTH"} +{"text":"In each case , the hearer is to understand the relation by inferringfrom sentenceand inferringfrom sentenceunder the listed constraints .","label":5,"label_text":"OWN"} +{"text":"For example :","label":5,"label_text":"OWN"} +{"text":"In passage, the times evoked by the simple pasts are further ordered by the Explanation relation indicated by because , resulting in the backward progression of time .","label":5,"label_text":"OWN"} +{"text":"The possibility of error occurrence within noun phrases is lower than between a noun phrase and a verbal phrase , a preposition phrase , an adverbial phrase .","label":5,"label_text":"OWN"} +{"text":"If the s rule in the running example is not optimized , the resulting processing behavior would not have fallen out so nicely :","label":5,"label_text":"OWN"} +{"text":"Proposalsandare inferred to be implicitly ACCEPTED because they are not rejected Walker and Whittaker 1990, Walker 1992.","label":6,"label_text":"OTH"} +{"text":"However , in practice , the more the level of depth of recursively appearing ` Comment ' is , the less comprehensible the sentence is .","label":1,"label_text":"CTR"} +{"text":"Unlike previous work , the algorithm categorizes word tokens in context instead of word types .","label":1,"label_text":"CTR"} +{"text":"In essence , pairs of trees are just a graphical notation for what has been put forward as the ` rule-to-rule ' - hypothesis , cf. Gazdar et al. 
1985, the fact that in the grammar each syntax rule is related with a semantic analysis rule .","label":4,"label_text":"BKG"} +{"text":"For more discussion of the use of binary decision-tree questions , see Magerman 1994.","label":6,"label_text":"OTH"} +{"text":"The first heuristic prefers evidence in which the system is most confident since high-quality evidence produces more attitude change than any other evidence form Luchok and McCroskey 1978.","label":3,"label_text":"BAS"} +{"text":"Although the hand-coded tree-cutting criteria are substantially better than the induced ones , we must remember that the former produce a grammar that in median allows 60 times faster processing than the original grammar and parser do .","label":5,"label_text":"OWN"} +{"text":"However , the equivalent of this kind of minimal types in untyped feature structure grammars are constants which can be used in a similar fashion for off-line optimization .","label":5,"label_text":"OWN"} +{"text":"Katz thus assumes that for a given conditioning wordthe probability of an unseen following wordis proportional to its unconditional probability .","label":6,"label_text":"OTH"} +{"text":"The lattice in figureunderlies the semantics of stratified logic .","label":5,"label_text":"OWN"} +{"text":"If a grammar is augmented with operations which are powerful enough to make most initial fragments constituents , then there may be unwanted interactions with the rest of the grammar ( examples of this in the case of CCG and the Lambek Calculus are given in Section) .","label":1,"label_text":"CTR"} +{"text":"A noun partition is any setsatisfying,and.","label":5,"label_text":"OWN"} +{"text":"very few rules license both left and right recursion ( for instance of the sort that is typically used to analyse noun compounding , i.e.) 
.","label":5,"label_text":"OWN"} +{"text":"A fully specified denotation of the meaning of a sentence is rarely required for translation , and as we pointed out when discussing logic representations , a complete specification may not have been intended by the speaker .","label":5,"label_text":"OWN"} +{"text":"Second , we wished to test the effectiveness of our evaluation heuristics in selecting the best parse .","label":5,"label_text":"OWN"} +{"text":"The dependency model has also been proposed by Kobayasi et al. 1994 for analysing Japanese noun compounds , apparently independently .","label":6,"label_text":"OTH"} +{"text":"Step 2 .","label":5,"label_text":"OWN"} +{"text":"Each reference marker is contraindexed with expressions with which it cannot co-specify .","label":6,"label_text":"OTH"} +{"text":"Methods described inmust be implemented to improve results in this case .","label":5,"label_text":"OWN"} +{"text":"We define, the similarity-based model for the conditional distribution of, as a weighted average of the conditional distributions of the words in:","label":5,"label_text":"OWN"} +{"text":"As for the technical papers , they were 60 % and 80 % respectively .","label":5,"label_text":"OWN"} +{"text":"When we consider full sentence processing , as opposed to incremental processing , the use of lexicalised grammars has a major advantage over the use of more standard rule based grammars .","label":5,"label_text":"OWN"} +{"text":"Our methodology is derived from that described by Lari 1990.","label":5,"label_text":"OWN"} +{"text":"For this system , the main goals are that an executable plan which meets the user 's goals is constructed and agreed upon by both the system and the user and then that the plan is executed .","label":6,"label_text":"OTH"} +{"text":"Clearly phonotactic constraints are useful , as both recall and accuracy improve .","label":5,"label_text":"OWN"} +{"text":"In grammar formalisms like DCG or HPSG , the complex nonterminals have an argument or a 
feature ( PHON ) that represents the covered substring explicitly .","label":6,"label_text":"OTH"} +{"text":"The required values of m may be calculated similarly with reference to the left contexts of rules .","label":5,"label_text":"OWN"} +{"text":"Parameters are set to reflect the frequency of the corresponding rule in the parsed corpus .","label":5,"label_text":"OWN"} +{"text":"One view on these cases may be that these are not discourse anaphora , but there seems to be no principled way to make this distinction .","label":5,"label_text":"OWN"} +{"text":"These auxiliary verbs as well as ordinary verbs can dominate some cases so that these agglutinations may change the whole syntax Gunji 1987.","label":4,"label_text":"BKG"} +{"text":"Our definitions are given with respect to a signature of the form, where Cat , Atom and Feat are non-empty , finite or denumerably infinite sets .","label":5,"label_text":"OWN"} +{"text":"A context size ( x-axis ) of , for example , 10 means 10 words before the target word and 10 words after the target word .","label":5,"label_text":"OWN"} +{"text":"We can benefit in two ways from performing such evaluations :","label":5,"label_text":"OWN"} +{"text":"( Using only endings seems to be a possible way )","label":5,"label_text":"OWN"} +{"text":"The algorithm then hypothesizes that the user has changed his mind about each belief in cand-set and predicts how this will affect the user 's belief about _bel ( step) .","label":5,"label_text":"OWN"} +{"text":"The similarity is measured on a semantic network constructed systematically from a subset of the English dictionary , LDOCE ( Longman Dictionary of Contemporary English ) .","label":5,"label_text":"OWN"} +{"text":"A heuristic word-based method for disambiguation , in which the random variables ( case slots ) are assumed to be dependent , is to calculate the following values of word-based likelihood and to select the interpretation corresponding to the higher likelihood value 
.","label":5,"label_text":"OWN"} +{"text":"This behaviour is clearly unmotivated by the corpus , and arises purely because of the inadequacy of the probabilistic model .","label":1,"label_text":"CTR"} +{"text":"Another example is connection strengths in neural network approaches to language processing , though it has been shown that certain networks are effectively computing probabilities Richard and Lippmann 1991.","label":5,"label_text":"OWN"} +{"text":"This is best illustrated on the basis of the following , more ` schematic ' , phrase structure rule :","label":5,"label_text":"OWN"} +{"text":"The connectives ` , ' and ` ; ' have the same precedence as in Prolog , while ` : ' has the same precedence as ` , ' .","label":5,"label_text":"OWN"} +{"text":"Obviously , such cases fall outside of the purview of the coordination schema .","label":1,"label_text":"CTR"} +{"text":"Measurements were taken from the following data .","label":5,"label_text":"OWN"} +{"text":"In the parse-forest grammar , complex symbols are non-terminals , atomic symbols are terminals .","label":5,"label_text":"OWN"} +{"text":"In order to identify structures of a compound noun , we must first find a set of words that compose the compound noun .","label":4,"label_text":"BKG"} +{"text":"For example the rules sanction both \/katab\/ ( M 1 , active ) and \/kutib\/ ( M 1 , passive ) as interpretations of ktb as shown in.","label":5,"label_text":"OWN"} +{"text":"This approach is exemplified by Combinatory Categorial Grammar , CCG Steedman 1991, which takes a basic CG with just application , and adds various new ways of combining elements together .","label":6,"label_text":"OTH"} +{"text":"This approach is learnable .","label":6,"label_text":"OTH"} +{"text":"In the second approach , the structure is :","label":6,"label_text":"OTH"} +{"text":"In this second mode , it can safely discard any partial parse which has a probability lower than the probability of the highest probability completed parse 
.","label":5,"label_text":"OWN"} +{"text":"Definition 7 .","label":5,"label_text":"OWN"} +{"text":"If the subtraction results in a non-empty string it creates a morphological rule by storing the POS - class of the shorter word as the I-class and the POS - class of the longer word as the R-class .","label":5,"label_text":"OWN"} +{"text":"add a capability to a system that it didn't previously have ,","label":4,"label_text":"BKG"} +{"text":"Finally , we implemented a Japanese language understanding system based on the theory we state in this paper , but due to the space limitation we will report the detail of implementation in other place in the near future .","label":5,"label_text":"OWN"} +{"text":"The cascading guesser outperformed the guesser supplied with the Xerox tagger by about 8 - 9 % and the guesser supplied with Brill's tagger by about 6 - 7 % .","label":1,"label_text":"CTR"} +{"text":"If the system does not get an acknowledgement , it will request acknowledgement the next time it considers the grounding situation .","label":5,"label_text":"OWN"} +{"text":"The learning is implemented as a two-staged process with feedback .","label":5,"label_text":"OWN"} +{"text":"Other compounds exhibited what Hindle and Rooth 1993 have termed SEMANTIC INDETERMINACY where the two possible bracketings cannot be distinguished in the context .","label":5,"label_text":"OWN"} +{"text":"Using this scheme , they predict which unobserved cooccurrences are more likely than others .","label":6,"label_text":"OTH"} +{"text":"For each such belief , the system could provide evidence against the belief itself , address the unaccepted evidence proposed by the user to eliminate the user 's justification for the belief , or both .","label":5,"label_text":"OWN"} +{"text":"In other words ,may only do what it is supposed to do : extraction , and we can directly read off the category assignment which extractions there will be .","label":5,"label_text":"OWN"} +{"text":"it does not consist of an 
infinite list of statements .","label":1,"label_text":"CTR"} +{"text":"If one were to include all subsuming concepts for each word , rather than just the synsets of which they are directly members , the concepts with non-zero values ofwould be as follows :","label":5,"label_text":"OWN"} +{"text":"That the lookup relation and the indexing scheme satisfy this property must be shown for particular grammar formalisms .","label":5,"label_text":"OWN"} +{"text":"We model the strength of a belief using endorsements , which are explicit records of factors that affect one 's certainty in a hypothesis Cohen 1985, following Galliers 1992, Logan et al. 1994.","label":3,"label_text":"BAS"} +{"text":"The incoherence of exampleis predicted by both their and our accounts by virtue of the fact that there is no coherence relation that corresponds to Narration with reverse temporal ordering .","label":6,"label_text":"OTH"} +{"text":"Definition 4 .","label":5,"label_text":"OWN"} +{"text":"The main advantage of the Xerox tagger when compared with earlier implementations of HMM taggers is that it can be trained using untagged text .","label":6,"label_text":"OTH"} +{"text":"The basic construction is just the same as in the paradigm structure , but now we have narrative progression in the consequent box .","label":5,"label_text":"OWN"} +{"text":"A node includes the following features ( among others ) :","label":5,"label_text":"OWN"} +{"text":"For instance , the experiencer is permitted to do something by the motivated .","label":5,"label_text":"OWN"} +{"text":"is , in GB term , [-anaphoric,+pronominal ] or pro .","label":4,"label_text":"BKG"} +{"text":"We assume a sign-based grammar with binary rules , each of which may be used to combine two signs by unifying them with the daughter categories and returning the mother .","label":5,"label_text":"OWN"} +{"text":"The final section contrasted parsing with lexicalised and rule based grammars , and argued that statistical language tuning is 
particularly suitable for incremental , lexicalised parsing strategies .","label":5,"label_text":"OWN"} +{"text":"As n grows , the parameter space for an n-gram model grows exponentially , and it quickly becomes computationally infeasible to estimate the smoothed model using deleted interpolation .","label":6,"label_text":"OTH"} +{"text":"To exemplify this point , it is worth considering a recent example where an alternative transcription of some data proved valuable in providing a fresh analysis of the data .","label":4,"label_text":"BKG"} +{"text":"Using constraints , we reduce the number of readings to 4 .","label":5,"label_text":"OWN"} +{"text":"The question whether the intersection of a FSA and an off-line parsable DCG is empty is undecidable .","label":5,"label_text":"OWN"} +{"text":"can elaborate onifdescribes an event , or","label":5,"label_text":"OWN"} +{"text":"The filtering tree is reversed and derives magic facts starting from the seed in a bottom-up fashion .","label":6,"label_text":"OTH"} +{"text":"closed-class vs. 
adjective ( numeral einen , einer ) ,","label":5,"label_text":"OWN"} +{"text":"Given the high level descriptions in sectionit remains only to formalise the decision process used to analyse a noun compound .","label":5,"label_text":"OWN"} +{"text":"the inside ;","label":5,"label_text":"OWN"} +{"text":"induction based on word type and context , using generalized left and right context vectors .","label":5,"label_text":"OWN"} +{"text":"To see how this works , let 's run through a simple example .","label":4,"label_text":"BKG"} +{"text":"In the latter case , two situations may arise : either the prefix is shared between nouns and some other category ( such as ment ) , or it must be barred from the list of noun endings ( such as aient , an inflectional marking of third person plural verbs ) .","label":5,"label_text":"OWN"} +{"text":"by locating the subject","label":5,"label_text":"OWN"} +{"text":"We use the estimates given by the standard back-off model , which satisfy that requirement .","label":5,"label_text":"OWN"} +{"text":"In fact , it helps explain infants ' ability to learn words from parental speech : these two sources alone are useful and infants have several others , like prosody and word stress patterns , available as well .","label":5,"label_text":"OWN"} +{"text":"In this section we analyze the results .","label":0,"label_text":"TXT"} +{"text":"This might , of course , end up in releasing the turn .","label":5,"label_text":"OWN"} +{"text":"Therein , the procedure is stated in terms of calls to an oracle which can determine if a noun compound is acceptable .","label":6,"label_text":"OTH"} +{"text":"Likewise , Bill gets the LF, and coordination results in the following LF for John and Bill :","label":6,"label_text":"OTH"} +{"text":"Not only would this allow the modelling of the restriction on centre-embedding , but it would also allow many other `` processing '' phenomena to be accurately characterized .","label":5,"label_text":"OWN"} +{"text":"Note in 
particular , that , because dominance is a transitive relation , and because of the inheritance condition on trees ( a node inherits the precedence relations of its ancestors ) , the two statements dom ( VP , NP) and prec ( V , NP) remain true after reanalysis .","label":6,"label_text":"OTH"} +{"text":"Some preliminary experiments with using measures such as perplexity and the average probability of hypotheses show that , while they do give an indication of convergence during re-estimation , neither shows a strong correlation with the accuracy .","label":5,"label_text":"OWN"} +{"text":"Note that this condition rules out the production responsible for building full binary trees since the x , y , x ' and y ' subtrees are not siblings in the mother 's tree despite the fact that all of the daughters share a common subtree z.","label":5,"label_text":"OWN"} +{"text":"is a combinatoric constant for taking account of the fact that we are not distinguishing permutations of the dependents ( e.g. there arepermutations of the r-dependents of h if these dependents are all distinct ) .","label":5,"label_text":"OWN"} +{"text":"In other words , object-level function application is handled simply by the meta-level function application .","label":5,"label_text":"OWN"} +{"text":"COLLABORATIVE PRINCIPLE :","label":5,"label_text":"OWN"} +{"text":"A complete phrase marker of the input string can then be constructed by following the manner in which the mother node from one proper branch is used as a daughter node in a dominating proper branch .","label":6,"label_text":"OTH"} +{"text":"The sequence of transitions corresponding to John likes Sue being a sentence , is given in Figure.","label":5,"label_text":"OWN"} +{"text":"In this paper , I report two experiments designed to determine how much manual training information is needed .","label":2,"label_text":"AIM"} +{"text":"As in the previous experiment we measured the precision , recall and coverage both on the lexicon and on the corpus 
.","label":5,"label_text":"OWN"} +{"text":"Precision is calculated as the ratio of manual-automatic matches \/ number of noun occurrences disambiguated by the procedure .","label":5,"label_text":"OWN"} +{"text":"The translation probability is then the sum of probabilities over different alignments f :","label":5,"label_text":"OWN"} +{"text":"Soundness in this context should be understood as the property that all parse trees in the parse forest grammar are valid parse trees .","label":5,"label_text":"OWN"} +{"text":"We satisfy the goal of favoring smaller grammars by choosing a prior that assigns higher probabilities to such grammars .","label":5,"label_text":"OWN"} +{"text":"`` Theories that analyse the distinction between the simple past and pluperfect purely in terms of different relations between reference times and event times , rather than in terms of event-connections , fail to explain whyis acceptable butis awkward . '' Lascarides and Asher 1993, pg. 470","label":6,"label_text":"OTH"} +{"text":"The monostratal , uniform treatment of syntax , semantics and phonology supports dataflow analysis , which is used extensively to provide the information upon which off-line compilation is based .","label":5,"label_text":"OWN"} +{"text":"``'' means that the correct answer was not obtained because the heuristic segmentation filtered out the correct segmentation .","label":5,"label_text":"OWN"} +{"text":"In this case , the generator either has to use a default or formulate a request for clarification in order to be able to continue its processing , i.e. 
, to produce an utterance .","label":4,"label_text":"BKG"} +{"text":"The accuracy results are shown in figure.","label":5,"label_text":"OWN"} +{"text":"I adopt the assumption that the participants in a dialogue are trying to achieve some purpose Grosz and Sidner 1986.","label":5,"label_text":"OWN"} +{"text":"Morphosyntactic Issues","label":4,"label_text":"BKG"} +{"text":"Immediately below the two rows of tones we see a row of numbers corresponding to the tones .","label":5,"label_text":"OWN"} +{"text":"Thus , although Lakoff and Peters's arguments count against standard deletion analyses , they do not count as general arguments against a unified treatment of constituent and non-constituent coordination .","label":1,"label_text":"CTR"} +{"text":"According to van Noord, two syntax-semantics pairs are linkable if their semantic forms are identical , i.e.","label":6,"label_text":"OTH"} +{"text":"Structural transfer can be incorporated to improve the efficiency of generation , but it is never necessary for correctness or even tractability .","label":5,"label_text":"OWN"} +{"text":"If there is an obligation to address a request , the actor will evaluate whether the request is reasonable , and if so , accept it , otherwise reject it , or , if it does not have sufficient information to decide , attempt to clarify the parameters .","label":5,"label_text":"OWN"} +{"text":"Resnik 1992 developed a method for automatically extracting class-based SRs from on-line corpora .","label":6,"label_text":"OTH"} +{"text":"Several researchers Partee 1984, Hinrichs 1986, Nerbonne 1986, Webber 1988 have sought to explain the temporal relations induced by tense by treating it as anaphoric , drawing on Reichenbach's separation between event , speech , and reference times Reichenbach 1947.","label":6,"label_text":"OTH"} +{"text":"The main problem here is that such grammars have no notion of a degree of grammatical acceptability - a sentence is either grammatical or ungrammatical 
.","label":5,"label_text":"OWN"} +{"text":"for any node v ofit must be the case that","label":5,"label_text":"OWN"} +{"text":"Ifis rational then for each feature structure F , F is satisfiable iff F has a resolvant .","label":5,"label_text":"OWN"} +{"text":"Incremental interpretation allows on-line semantic filtering , i.e. parses of initial fragments which have an implausible or anomalous interpretation are rejected , thereby preventing ambiguities from multiplying as the parse proceeds .","label":6,"label_text":"OTH"} +{"text":"The examplebelow shows that the resolution of the anaphoric pronoun that must be performed first and that the PP starting with of be attached later .","label":5,"label_text":"OWN"} +{"text":"Although we have seen more than one transcription for a given Fsequence , it is inconvenient to be required to run the programs several times in order to see if more than one solution can be found .","label":5,"label_text":"OWN"} +{"text":"As things stand this definition is nearly isomorphic to that given for PCFGs , with the major differences being two changes which move us from rules to introduction relationships .","label":5,"label_text":"OWN"} +{"text":"We compared the best sentence hypothesis in each original lattice and in the modified one , and counted the word disagreements in which one of the hypotheses is correct .","label":5,"label_text":"OWN"} +{"text":"For example , a parse of the sentence John likes Mary becomes a mapping between an initial state , c, through some intermediate states , c, cto a final state ci.e. 
ccccIf we use a dynamic grammar to describe a shift reduce parser , states encode the current stack configuration , and are related by rules which correspond to shifting and reducing .","label":6,"label_text":"OTH"} +{"text":"The bottom-up left-corner ( BU-LC ) parser operates left-to-right and breadth-first , storing partial ( active ) constituents in a chart ;Carroll 1993gives a full description .","label":6,"label_text":"OTH"} +{"text":"a definition of the tag set to be used by the HMM ,","label":5,"label_text":"OWN"} +{"text":"( We often refer toas the model description length ) .","label":5,"label_text":"OWN"} +{"text":"Test instances consisted of a noun group ( i.e. , all the nouns in a numbered category ) together with a single word in that group to be disambiguated .","label":5,"label_text":"OWN"} +{"text":"In general , we have the following mapping between transcriptions under the two views of downstep :","label":5,"label_text":"OWN"} +{"text":"In a same paragraph , contiguous sentences are written in the same language","label":5,"label_text":"OWN"} +{"text":"The grammar employed is a partial characterisation ofChomsky's Government-Binding theoryChomsky 1981,Chomsky 1986and only takes account of very local constraints ( i.e. 
X-bar , Theta and Case ) ; a way of encoding all constraints in the proper branch formalismCrocker 1992will be needed before a grammar of sufficient coverage to be useful in corpora analysis can be formulated .","label":3,"label_text":"BAS"} +{"text":"The dialects of Scotland and the Isle of Man form a cluster with a great deal of internal diversity () , and all the sites in Ireland form another cluster averaging, with only Rathlin Island being indifferently classified .","label":5,"label_text":"OWN"} +{"text":"The first and second sequences were taken by extracting the initial 10 Fvalues from the third and fourth sequences , thereby avoiding the asymptotic behaviour of the longer sequences .","label":5,"label_text":"OWN"} +{"text":"The model presented corrects errors resulting from combining nonconcatenative strings as well as more standard morphological or spelling errors .","label":2,"label_text":"AIM"} +{"text":"Although these embody the spirit of the constraints found inChomsky 1981they are not intended to be entirely faithful to this specification of syntactic theory .","label":5,"label_text":"OWN"} +{"text":"One of the pioneering works is ` semantic differential 'Osgood 1952which analyses meaning of words into a range of different dimensions with the opposed adjectives at both ends ( see Figure) , and locates the words in the semantic space .","label":6,"label_text":"OTH"} +{"text":"DCGs are represented using the same notation we used for context-free grammars , but now of course the category symbols can be first-order terms of arbitrary complexity ( note that without loss of generality we don't take into account DCGs having external actions defined in curly braces ) .","label":6,"label_text":"OTH"} +{"text":"Since the programs performed about equally on finding transcriptions with an evaluation less than 7 , I shall display these transcriptions along with an indication of how many times each program found the transcription ( G = genetic , A = annealing ) 
.","label":5,"label_text":"OWN"} +{"text":"In an attempt to explore this notion further , we have investigated the approach to nominal semantics known as Qualia structurePustejovsky 1991and considered how this may complement the LF notion to improve its descriptive power .","label":3,"label_text":"BAS"} +{"text":"In general what we will look at in the results is how the tagging accuracy changes as the size of the tagset changes .","label":5,"label_text":"OWN"} +{"text":"These new properties are only the result of the interaction of the verb with the preposition .","label":5,"label_text":"OWN"} +{"text":"But no such increase in complexity is required under the present treatment .","label":5,"label_text":"OWN"} +{"text":"For example , Figurea shows a DRS for sentence, according to the principles above .","label":6,"label_text":"OTH"} +{"text":"Notice that while the feature structures at the root ofandare not compatible withand, they do agree with respect to those parts that are fully expanded at's root node .","label":5,"label_text":"OWN"} +{"text":"Through trimming this magic rule , e.g. 
, given a bounded term depthSato and Tamaki 1984or a restrictorShieber 1985, constructing an abstract unfolding tree reveals the fact that a cycle results from the magic rule .","label":5,"label_text":"OWN"} +{"text":"","label":5,"label_text":"OWN"} +{"text":"Given its goal to form a shared plan , and the fact that the current plan ( consisting of the single abstract move-commodity action ) is not executable , the actor will call the domain plan reasoner to elaborate the plan .","label":5,"label_text":"OWN"} +{"text":"Finally the problem of how to determine obligatory \/ optional cases based on dependencies acquired from data should also be addressed .","label":5,"label_text":"OWN"} +{"text":"We are currently still using the original Hodyne function because it works well in practice .","label":5,"label_text":"OWN"} +{"text":"However , I use theoremand resolved feature structures to yield a less general interpretation free characterisation of a satisfiable feature structure that admits of such an algorithm .","label":5,"label_text":"OWN"} +{"text":"Our working hypothesis is that syntactic behavior is reflected in co-occurrence patterns .","label":5,"label_text":"OWN"} +{"text":"The following list provides brief descriptions of the 25 senses of line in WordNet :","label":5,"label_text":"OWN"} +{"text":"describes a complex tense state .","label":5,"label_text":"OWN"} +{"text":"SRs have been used to express semantic constraints holding in different syntactic and functional configurations .","label":4,"label_text":"BKG"} +{"text":"This means , as desired , that for each choice of an eventof Mary 's telephoning , and reference time` just after ' it , there is a state of Sam 's being asleep , that surrounds.","label":6,"label_text":"OTH"} +{"text":"To find the most probable parse for a sentence , we simply find the path from word to word which maximizes the product of the state transitions ( as we have a first order Markov process ) .","label":5,"label_text":"OWN"} 
+{"text":"First we consider the features ofMel'cuk's treatment that we have wanted to preserve .","label":0,"label_text":"TXT"} +{"text":"Furthermore , re-entrancy in the form of shared feature structures within and across nodes will be found in PLPATR ( see for example Figure) .","label":5,"label_text":"OWN"} +{"text":"Compilation to a network may still make sense , however , and because these languages tend to exhibit few non-concatenative morphophonological phenomena other than vowel harmony , the continuation class mechanism may suffice to describe the allowed affix sequences at the surface level .","label":4,"label_text":"BKG"} +{"text":"It will assign a value close to 1.0 if two words share many neighbors , and 0.0 if they share none .","label":5,"label_text":"OWN"} +{"text":"In this definition R acts as a specification of the accessibility relationships which can hold between nodes of the trees admitted by the grammar .","label":6,"label_text":"OTH"} +{"text":"After some experimentation , the evaluation feature weights were set in the following way .","label":5,"label_text":"OWN"} +{"text":"Notice that the interpretation of the shading in this figure is different from that in previous figures .","label":5,"label_text":"OWN"} +{"text":"That is , one would predict a clash of temporal relations for sentence, since the simple pasts induce the forward progression of time but the conjunction indicates the reverse temporal ordering .","label":1,"label_text":"CTR"} +{"text":"Similarly , all noun phrases might be treated as mappings from an X \/ np to an X .","label":5,"label_text":"OWN"} +{"text":"If there is a final statein,","label":5,"label_text":"OWN"} +{"text":"We maintain a single hypothesis grammar which is initialized to a small , trivial grammar .","label":5,"label_text":"OWN"} +{"text":"By adding the extra utterance to the initial theory, uttered ( went ( all ( boys ) , theatre ) ) , one would obtain one optimistic model schema in which the conventional 
implicatures have been cancelled ( see figure) .","label":5,"label_text":"OWN"} +{"text":"No clear co-relation between Assoc and the manual diagnosis is found .","label":6,"label_text":"OTH"} +{"text":"Qualitatively , the algorithm does a good job in most of the categories .","label":5,"label_text":"OWN"} +{"text":"To achieve this , we need to interpret the relative pronouns in the two relative clauses as leaving a hole in the interpretation of clause and then abstracting with respect to that hole .","label":5,"label_text":"OWN"} +{"text":"Another set of approaches for computing distance was based on the phonetics .","label":6,"label_text":"OTH"} +{"text":"In the rest of this section we show the examples that exemplify these constraints .","label":0,"label_text":"TXT"} +{"text":"In examplethe fact that the key is mentioned only in the second sentence oflinkswith the second thread .","label":5,"label_text":"OWN"} +{"text":"The leaf distributions in decision trees are empirical estimates , i.e. relative-frequency counts from the training data .","label":6,"label_text":"OTH"} +{"text":"Suitable languages for dynamics are both formal and declarative , and are therefore also appropriate to express linguistic generalisations .","label":6,"label_text":"OTH"} +{"text":"There is a large body of psycholinguistic evidence which suggests that meaning can be extracted before the end of a sentence , and before the end of phrasal constituentsMarslen-Wilson 1973,Tanenhaus et al. 
1990.","label":4,"label_text":"BKG"} +{"text":"Features can be used to control the application of rules to particular lexical items where the applicability cannot be deduced from spellings alone .","label":5,"label_text":"OWN"} +{"text":"To negotiate an agreement , each agent carries out means-end reasoning about the furniture pieces that they have that can be used in the floor plan .","label":6,"label_text":"OTH"} +{"text":"has only one antecedent .","label":5,"label_text":"OWN"} +{"text":"Then it is not possible to estimate probabilities from observed frequencies , and some other estimation scheme has to be used .","label":4,"label_text":"BKG"} +{"text":"In","label":5,"label_text":"OWN"} +{"text":"Proof :","label":5,"label_text":"OWN"} +{"text":"However , if we are content with being able to plot all categories on a two-dimensional plane , which probably is what we want to do , for ease of exposition , we only use the two first and most significant functions .","label":5,"label_text":"OWN"} +{"text":"Even ifwere considerably greater than, the cumulative negative effect of the longer states inwould eventually lead to the model giving the sentence with the shifted NPa higher probability .","label":5,"label_text":"OWN"} +{"text":"In such a setting , initial transition and symbol biases are replaced by frequencies of tag sequences and tag instantiation from a relatively small pre-tagged corpus .","label":6,"label_text":"OTH"} +{"text":"On this same test set , SPATTER scored 76 % .","label":1,"label_text":"CTR"} +{"text":"save the cost of constructing a thesaurus by hand ,","label":4,"label_text":"BKG"} +{"text":"It is this latter capability which distinguishes ignore rules from regular rules , as they are functionally equivalent otherwise , mainly serving as a notational aid for the grammar writer .","label":5,"label_text":"OWN"} +{"text":"With respect to nouns , we first selected productive endings ( iste , eau , eur , rice ) , until we realised a better choice was 
to assign a noun tag to all endings , with the exception of those previously assigned to other classes .","label":5,"label_text":"OWN"} +{"text":"This paper will focus on our temporal processing algorithm , and in particular on our analysis of narrative progression , rhetorical structure , perfects and temporal expressions .","label":2,"label_text":"AIM"} +{"text":"The lattice depicts the three levels of strength that seem to account for the inferences that pertain to natural language semantics and pragmatics : indefeasible information belongs to the u layer , infelicitously defeasible information belongs to the i layer , and felicitously defeasible information belongs to the d layer .","label":5,"label_text":"OWN"} +{"text":"This make the elliptical conjunct equivalent to","label":5,"label_text":"OWN"} +{"text":"Half the repetitions were accompanied by cue words .","label":5,"label_text":"OWN"} +{"text":"As a result of the explicit representation of filtering we do not need to postpone abstraction until run-time , but can trim the magic predicates off-line .","label":5,"label_text":"OWN"} +{"text":"A second design criterion on tagsets is the internal one of making the tagging as effective as possible .","label":4,"label_text":"BKG"} +{"text":"Ray , in, repeats Harry 's assertion from.","label":5,"label_text":"OWN"} +{"text":"Here , the internal states are not accessible , so we can't get interleaving of two coordinations , as in :","label":5,"label_text":"OWN"} +{"text":"The structure is almost the same asGunji's structure except for explicitly showing complex proposition , subordinate-clause and conjunctive-particle that are newly added to deal with complex sentences .","label":6,"label_text":"OTH"} +{"text":"Precision","label":5,"label_text":"OWN"} +{"text":"The current exploratory study uses control as a parameter for identifying these higher level structures .","label":2,"label_text":"AIM"} +{"text":"It is only at the sentence level in simple narrative texts 
that the presentation order and the natural order of evaluation necessarily coincide .","label":6,"label_text":"OTH"} +{"text":"Some examples of translations before and after the introduction of the new processing are given below .","label":5,"label_text":"OWN"} +{"text":"In practice , this restriction requires that sufficiently rich information be transferred from the previous translation stages to ensure that sign combination is deterministic .","label":5,"label_text":"OWN"} +{"text":"For the most part the import of these clauses should be clear .","label":5,"label_text":"OWN"} +{"text":"The relative pronoun which would , for instance , receive categorywithbeing implication in LP , i.e. , it requires as an argument `` an s lacking an np somewhere '' .","label":6,"label_text":"OTH"} +{"text":"We redifined cv by taking this tendency as the formula that follows :","label":5,"label_text":"OWN"} +{"text":"Harry interrupts her atsince he believes he has enough information to suggest a course of action , and tells her take your money .","label":5,"label_text":"OWN"} +{"text":"However , followingBernard Langwe argue that it might be fruitful to take the input more generally as a finite state automaton ( FSA ) to model cases in which we are uncertain about the actual input .","label":3,"label_text":"BAS"} +{"text":"In this way , if w and w ' have comparable frequencies but w ' has a sharper context distribution than w , thenis greater than.","label":5,"label_text":"OWN"} +{"text":"This often , but not always , makes the NOUN-SG tag irrelevant .","label":5,"label_text":"OWN"} +{"text":"How might one evaluate the relative contributions of each of these factors or compare two approaches to the same problem ?","label":2,"label_text":"AIM"} +{"text":"To satisfy this property , code words are assigned so that their lengths are frequency-based ; the length of the code word for a word of frequency f(w) will not be greater than :","label":5,"label_text":"OWN"} +{"text":"Because 
search starts from the current pointer location , items that have been stored most recently are more likely to be retrieved , predicting recency effectsBaddeley 1986.","label":6,"label_text":"OTH"} +{"text":"Second , we have modeled features as partial functions on the f-structure nodes - this ensures that any complex valued attribute is either undefined , or is associated with a unique sub-part of the current f-structure .","label":5,"label_text":"OWN"} +{"text":"The definitions of noun phrase countability given in Section, while useful for analyzing English , are not sufficient for translating from Japanese to English .","label":0,"label_text":"TXT"} +{"text":"The first step in the analysis , the computation of linguistic distance between each pair of sites , can be computed as Levenshtein distance between phonetic strings .","label":5,"label_text":"OWN"} +{"text":"The result is therefore similar to starting with the sentence :","label":6,"label_text":"OTH"} +{"text":"A final statedenotes recognition of a phrasewith e errors whereis a number of components in rule p .","label":6,"label_text":"OTH"} +{"text":"The former parses many sentences up to twice as fast , but a small proportion of the others are parsed almost twice as slowly .","label":5,"label_text":"OWN"} +{"text":"a strong belief that Dr. 
Smith will not be a visitor at IBM next year (visitor(Smith,IBM,next year)) , and","label":5,"label_text":"OWN"} +{"text":"The special status of Ulster contradicts the position ofO'Rahilly 1932that Connacht groups with Ulster to form a Northern dialect over against Munster .","label":5,"label_text":"OWN"} +{"text":"However , as the semantic terms become more complex , it is no trivial matter to write- reduction that will correctly handle variable capture .","label":1,"label_text":"CTR"} +{"text":"The inverse relationneed not be a function , allowing different numbers of words in the source and target sentences .","label":5,"label_text":"OWN"} +{"text":"All of the `` knowledge engineering '' is localised in the choice of tagset and the method of training .","label":4,"label_text":"BKG"} +{"text":"Our experiments have indicated that , in most cases , a less fragmented analysis is more desirable .","label":5,"label_text":"OWN"} +{"text":"Thus nominals and their modifiers pick out entities in a ( real or imaginary ) world , verbs and their modifiers refer to actions or events in which the entities participate in roles indicated by the edge relations .","label":5,"label_text":"OWN"} +{"text":"The similarityis computed in the following way .","label":5,"label_text":"OWN"} +{"text":"However , the accuracy is low .","label":1,"label_text":"CTR"} +{"text":"Since there are 22 words ( including punctuation ) the total number of strings would be","label":5,"label_text":"OWN"} +{"text":"Cawsey et al. 1993,Logan et al. 
1994introduced the idea of utilizing a belief revision mechanismGalliers 1992to predict whether a set of evidence is sufficient to change a user 's existing belief and to generate responses for information retrieval dialogues in a library domain .","label":6,"label_text":"OTH"} +{"text":"the conjunction clause is searched for before either of the two s clauses ;","label":5,"label_text":"OWN"} +{"text":"Currently , also , results are returned in an order determined by the order of rules in the grammar .","label":5,"label_text":"OWN"} +{"text":"for eachand each, ifis defined thenis defined , and, and","label":5,"label_text":"OWN"} +{"text":"Tableshows the cross-table formed by the conditional and marginal distributions in the case ofand.","label":5,"label_text":"OWN"} +{"text":"Each of the final proper substrings of the sentence ( i.e. some books , Mary some books etc. ) can be used as a conjunct e.g.","label":4,"label_text":"BKG"} +{"text":"Ifis a good approximation of, association measures should be low ( near zero ) , otherwise deviating significantly from zero .","label":4,"label_text":"BKG"} +{"text":"In terms of general results , we have identified some factors that make evaluations of this type more complicated and which might lead us to evaluate solely quantitative results with care .","label":5,"label_text":"OWN"} +{"text":"Figureshows the representations for the sentence Bill became upset ; this will serve as the initial source clause representation for the examples that follow .","label":6,"label_text":"OTH"} +{"text":"A machine translation system needs to have this knowledge codified in some way .","label":4,"label_text":"BKG"} +{"text":"The ambiguity types resolved by this model are analysed and compared to ambiguity types of English and French .","label":5,"label_text":"OWN"} +{"text":"The scope node ,can be resolved to( ` every house ' takes wide scope ) , or( ` a Canadian flag ' takes wide scope ) .","label":5,"label_text":"OWN"} +{"text":"This has 
the advantage of making ` pure ' probabilities available , in that the X-bar probability will reflect the likelihood of the structure alone , and will be ` uncontaminated ' by any other information .","label":5,"label_text":"OWN"} +{"text":"It therefore seems likely that implementational decisions and optimisations based on subtle properties of specific grammars can , and may very often be , more important than worst-case complexity when considering the practical performance of parsing algorithms .","label":5,"label_text":"OWN"} +{"text":"Note that when we do not have enough data ( i.e. for small N ) , the thresholds will be large and few nodes tend to be linked , resulting in a simple model in which most of the case slots are judged independent .","label":5,"label_text":"OWN"} +{"text":"Recall","label":5,"label_text":"OWN"} +{"text":"Consider a situation in which an agent A wants an agent B to accept a proposal P .","label":6,"label_text":"OTH"} +{"text":"It requires to distinguish first , middles and last syllabs .","label":5,"label_text":"OWN"} +{"text":"Consider the following pair of sentences :","label":5,"label_text":"OWN"} +{"text":"Since complete specification of transfer operations is not required for correct generation of grammatical target text , the version of Shake-and-Bake translation presented here maintains its advantage over traditional transfer models , in this respect .","label":5,"label_text":"OWN"} +{"text":"This leads to distortions of the probability estimates since the training algorithm spends part of its probability mass on impossible structures .","label":5,"label_text":"OWN"} +{"text":"Again , interpretation seen as description building sits easily with this .","label":5,"label_text":"OWN"} +{"text":"Two intensional verbs such as want and seek are also coordinated at their basic ( higher ) types :","label":6,"label_text":"OTH"} +{"text":"In contrast to LIG , PLPATR licenses structure sharing on the right hand side of productions 
.","label":5,"label_text":"OWN"} +{"text":"Tableshows possible structures for four words sequence and their occurrence in all data collections .","label":5,"label_text":"OWN"} +{"text":"I would also like to acknowledge valuable advice from Tracy Holloway King and Steven Abney , who commented on earlier versions of this paper .","label":5,"label_text":"OWN"} +{"text":"Although the grammar is very limited , the use of probabilities in ranking the parser 's output can be seen as a first step towards implementing a principle-based parser using a more fully specified collection of grammar modules .","label":5,"label_text":"OWN"} +{"text":"In other words , if only ( P , Q ) holds then P satisfies Q and nothing else does .","label":5,"label_text":"OWN"} +{"text":"In the first experiment the whole of each corpus was used to train the model , and a small sample from the same text was used as test data .","label":5,"label_text":"OWN"} +{"text":"We further subdivide pluralia tanta into two types , those that can use the classifier pair to be denumerated , such as a pair of scissors and those that can't , such as clothes .","label":4,"label_text":"BKG"} +{"text":"Because of the inevitable differences in their beliefs about the world -- specifically about what is salient -- the agents may have to collaborate to make the expression adequate .","label":4,"label_text":"BKG"} +{"text":"However , now consider processing the rabbit in none of the boxes .","label":6,"label_text":"OTH"} +{"text":"This is impossible to do with corpus analysis alone .","label":5,"label_text":"OWN"} +{"text":"We now state the relation between the semantic representations of the source language and target language .","label":5,"label_text":"OWN"} +{"text":"Then , after the meta-level- reduction using the new scoped constant c , the following goal is called :","label":5,"label_text":"OWN"} +{"text":"The choice is hypothesized to depend on cognitive properties of B , e.g. 
what B knows , B 's attentional state , and B 's processing capabilities , as well as properties of the task and the communication channel .","label":5,"label_text":"OWN"} +{"text":"Either could be a valuable aid in choosing the correct correction .","label":5,"label_text":"OWN"} +{"text":"In addition , people are prone to mistakes in writing sentences .","label":4,"label_text":"BKG"} +{"text":"Our analysis predicts this fact in the same way asPartee and Rooth's analysis does .","label":3,"label_text":"BAS"} +{"text":"The knowledge source that is used for default handling should provide the most plausible actions for a default situation .","label":5,"label_text":"OWN"} +{"text":"We also recommended using an underspecified representation of temporal \/ rhetorical structure to avoid generating all solutions until higher-level knowledge can aid in reducing ambiguity .","label":5,"label_text":"OWN"} +{"text":"Most linguistic approaches account for the defeasibility of pragmatic inferences by analyzing them in a context that consists of all or some of the previous utterances , including the current one .","label":6,"label_text":"OTH"} +{"text":"One case where the dynamic grammars correctly violate the substring hypothesis is when a string already involves a coordination .","label":5,"label_text":"OWN"} +{"text":"They have both singular and plural forms , and can also be used with much .","label":4,"label_text":"BKG"} +{"text":"This should have the advantage over the two level error rules in that it uses a good method of calculating likely error positions and because a set of correction possibilities can be generated fairly cheaply .","label":5,"label_text":"OWN"} +{"text":"The Achilles heel for most theories of presupposition has been their vulnerability to the projection problem .","label":1,"label_text":"CTR"} +{"text":"Every element ofmust be realized in.","label":6,"label_text":"OTH"} +{"text":"There are two principal sources for the parameters of the model 
.","label":4,"label_text":"BKG"} +{"text":"In our first experiment , we tried to acquire slot-based case frame patterns .","label":5,"label_text":"OWN"} +{"text":"correct categorization - inferred classes should correspond to the correct senses of the words that are being generalized - ,","label":5,"label_text":"OWN"} +{"text":"However we use semantic role like `` agent '' , `` patient '' , `` experiencer '' , and so on , as argument roles of soa .","label":5,"label_text":"OWN"} +{"text":"Thus there is a trade-off between the simplicity of a model and the goodness of fit to data .","label":4,"label_text":"BKG"} +{"text":"Our microplanner is restricted to the treatment of the reference choices for the inference methods and for the previously presented intermediate conclusions .","label":5,"label_text":"OWN"} +{"text":"The important and indispensable part of semantics of complex sentence is , roughly speaking , the relation between a subordinate clause and the main clause .","label":5,"label_text":"OWN"} +{"text":"The resulting model more readily accounts for discourse behavior in adversarial situations and other situations where it is implausible that the agents adopt each others goals .","label":5,"label_text":"OWN"} +{"text":"So far we have only considered semantic representations which do not involve quantifiers ( except for the existential quantifier introduced by the mechanism above ) .","label":5,"label_text":"OWN"} +{"text":"However , the head of the subject is then found and number agreement with the verb can be assessed .","label":5,"label_text":"OWN"} +{"text":"It can be glossed as follows .","label":5,"label_text":"OWN"} +{"text":"We use SEM_IND as an abbreviation for the feature path SEM.CONT.IND .","label":5,"label_text":"OWN"} +{"text":"Besides the intrinsic difficulties of this approach , it does not seem appropriate when comparing across different techniques for learning SRs , because of its qualitative flavor .","label":5,"label_text":"OWN"} 
+{"text":"As for the technical papers , the average length ratio ( abstract \/ original ) was 24 % , and the coverage of the key sentence and the most important key sentence were 51 % and 74 % respectively .","label":5,"label_text":"OWN"} +{"text":"The weight introduced incould alternatively be found in a local manner , in such a way that more polysemous nouns would give less evidence to each one of their senses than less ambiguous ones .","label":5,"label_text":"OWN"} +{"text":"Work described in this paper started from an idea of an error processor that would sit on top of an editor , detecting \/ correcting errors just after entry , while the user continued with further text , relieved from tedious backtracking .","label":2,"label_text":"AIM"} +{"text":"Future work will investigate the effect of training the networks on the positive examples alone .","label":5,"label_text":"OWN"} +{"text":"First , we evaluated the guessing rules against the actual lexicon : every word from the lexicon , except for closed-class words and words shorter than five characters , was guessed by the different guessing strategies and the results were compared with the information the word had in the lexicon .","label":5,"label_text":"OWN"} +{"text":"In different iterations over these candidate classes , two operations are performed : first , the class , c , having the best Assoc ( best class ) , is extracted for the final result ; and second , the remaining candidate classes are filtered from classes being hyper \/ hyponyms to the best class .","label":5,"label_text":"OWN"} +{"text":"In this example once an algorithm fails atit will fail onandas well since the choices of a cospecifier in the following examples are dependent on the choice in.","label":5,"label_text":"OWN"} +{"text":"Finally , we deal with sentences such as, which contain an iteration of an implicit generic quantifier and always .","label":5,"label_text":"OWN"} +{"text":"We ran discriminant analysis on the texts in the 
corpus using several different features as seen in table.","label":5,"label_text":"OWN"} +{"text":"Scoped propositions can then be obtained by using an outside-in quantifier scoping algorithmLewin 1990, or an inside-out algorithm with a free variable constraintHobbs and Shieber 1987.","label":6,"label_text":"OTH"} +{"text":"In this case the extraction process is invoked recursively to extract subrules rooted in the current node .","label":6,"label_text":"OTH"} +{"text":"Brilladdresses the problem of finding a valid metric for distituency by using a generalized mutual information statistic .","label":6,"label_text":"OTH"} +{"text":"There are also potential computational applications for incremental interpretation , including early parse filtering using statistics based on logical form plausibility , and interpretation of fragments of dialogues ( a survey is provided byMilward and Cooper 1994, henceforth referred to asM and C) .","label":4,"label_text":"BKG"} +{"text":"In a sentence like Le secteur matires ( NOUN-PL ) plastiques ( ADJ-PL \/ NOUN-PL \/ VERB-P 1 P 2 ) ... 
the verb reading for plastiques is impossible .","label":5,"label_text":"OWN"} +{"text":"Kehleradopts an analysis where ( referential ) arguments to verbs are represented as related to a Davidsonian event via thematic role functions , e.g..","label":6,"label_text":"OTH"} +{"text":"Exampleseems slightly unnatural , but it is much improved if we replace books by a heavier string such as books about gardening .","label":4,"label_text":"BKG"} +{"text":"For example , consider the ambiguous sentence ,","label":1,"label_text":"CTR"} +{"text":"The fact that each string received multiple parses ( the mean number of analyses being 9.135 , and the median , 6 ) suggests that the probabilistic information did favourably guide the selection of a single analysis .","label":5,"label_text":"OWN"} +{"text":"LHIP provides a processing method which allows selected portions of the input to be ignored or handled differently .","label":5,"label_text":"OWN"} +{"text":"The CCG rulesshown in Figureare implemented in the system described in this paper .","label":6,"label_text":"OTH"} +{"text":"For typed feature structures lacking re-entrancy we believe our proposal to be the simplest and most natural which is available .","label":5,"label_text":"OWN"} +{"text":"The following is the basic Magic algorithm taken fromRamakrishnan et al. 
1992.","label":6,"label_text":"OTH"} +{"text":"The individual default descriptions should take into account the global constraints for processing stated in the knowledge sources of the system .","label":5,"label_text":"OWN"} +{"text":"FollowingLiberman et al., voice pitch was varied by getting the informant to speak at different volumes and by adjusting the recording level appropriately .","label":3,"label_text":"BAS"} +{"text":"Now , since the counts of this sequent must be balanced , the sequencemust contain for eachexactly 3and exactly Nas subformulae .","label":5,"label_text":"OWN"} +{"text":"These are :","label":5,"label_text":"OWN"} +{"text":"To illustrate the algorithm I zoom in on the application of the above algorithm to one particular grammar rule .","label":6,"label_text":"OTH"} +{"text":"And there is no reason why the items characterizing the sentence have to be ( sequences of ) words ; occurrences of grammar rules , either without any context or in the context of , say , the rules occurring just above them in the parse tree , can be treated in just the same way .","label":5,"label_text":"OWN"} +{"text":"Dependency representations have been used in large scale qualitative machine translation systems , notably byMcCord 1988.","label":5,"label_text":"OWN"} +{"text":"There is no single best answer : it depends on one 's goals .","label":5,"label_text":"OWN"} +{"text":"So , ifthen the denotation of typecontains the denotation of type.","label":5,"label_text":"OWN"} +{"text":"the evaluation of the accuracy of the rhetorical structure analysis carried out previouslySumita 1992showed 74 % .","label":5,"label_text":"OWN"} +{"text":"A context-free grammar is represented as a definite-clause specification as follows .","label":5,"label_text":"OWN"} +{"text":"This work is being expanded to address issues pertaining to discourse structure and inter-segment coherence .","label":5,"label_text":"OWN"} +{"text":"It may be that an appropriate choice of scoring function 
will circumvent this difficulty , but this is left as a matter for further research .","label":5,"label_text":"OWN"} +{"text":"For n-gram models , we triedfor each domain .","label":6,"label_text":"OTH"} +{"text":"This is implied by the first goal of the predicate principles\/3 , the constituent order principle , which determines how the PHON value of a constituent is constructed from the PHON values of its daughters .","label":5,"label_text":"OWN"} +{"text":"Thus , using the abstract syntax capabilities of-Prolog , we can have a direct implementation of the underlying linguistic formalism , in stark contrast to the first-order simulation shown in Figure.","label":5,"label_text":"OWN"} +{"text":"There are 11 such cases for four kanzi character words , 35 such cases for five kanzi character words , and 29 cases for six kanzi character words .","label":5,"label_text":"OWN"} +{"text":"pre-subject - subject - predicate","label":5,"label_text":"OWN"} +{"text":"A simple , uninteresting example to fix some notation :","label":4,"label_text":"BKG"} +{"text":"It is plausible that the deviation of the cells not taken into account by Assoc can help on extracting useful SRs .","label":5,"label_text":"OWN"} +{"text":"On the other hand , how many of the noun occurrences in the testing sample have the correct sense introduced in the taxonomy : 2,165 of the 2,372 well-extracted triples ( 92.3 % ) .","label":5,"label_text":"OWN"} +{"text":"After further testing , we again re-enter the rewrite phase and this time note that brown can be inserted in the maximal TNCB the dog barked adjoined with dog ( figure) .","label":5,"label_text":"OWN"} +{"text":"Here , we have separated the context into a contingent set of contextual propositions S and a set R of ( monolingual ) ` meaning postulates ' , or selectional restrictions , that constrain the word sense predicates in all contexts .","label":5,"label_text":"OWN"} +{"text":"Another type of false positive example 
is","label":5,"label_text":"OWN"} +{"text":"In any specific application area it will be unlikely that the text database to be accessed will be completely free form .","label":5,"label_text":"OWN"} +{"text":"Since beliefsandabove constitute a warranted piece of evidence against the proposed belief and beliefsandconstitute a strong piece of evidence against it , the system will not accept On-Sabbatical(Smith,next year) .","label":5,"label_text":"OWN"} +{"text":"Mental actions in the intermediate plans of a referring expression plan allow the speaker to choose the most salient attributes that have not yet been chosen , and constraints in the surface speech actions make sure the speaker believes that each attribute is true .","label":5,"label_text":"OWN"} +{"text":"The reason is that this construction typically yields an enormous amount of rules that are ` useless ' .","label":4,"label_text":"BKG"} +{"text":"In the past research , the distributional pattern of each case slot is learned independently , and methods of resolving ambiguity are also based on the assumption that case slots are independentHindle and Rooth 1991,Chang et al. 1992,Sekine et al. 1992,Resnik 1992,Grishman and Sterling 1994,Alshawi and Carter 1995,Li and Abe 1995, or dependencies between at most two case slots are consideredBrill and Resnik 1994,Ratnaparkhi et al. 
1994,Collins and Brooks 1995.","label":6,"label_text":"OTH"} +{"text":"Unfortunately since researchers do not even agree on which phenomena can be explained syntactically and which semantically , the boundaries between two modules are rarely the same in NL systems .","label":4,"label_text":"BKG"} +{"text":"We discuss these below .","label":5,"label_text":"OWN"} +{"text":"Whenever a linguistic object is recognized , it is thrown into the solution of ChAM , and acts as a molecule .","label":5,"label_text":"OWN"} +{"text":"Since the late ' 80 s part-of-speech ( POS ) disambiguation using Hidden Markov Models ( HMM ) has been a widespread method for tagging texts .","label":4,"label_text":"BKG"} +{"text":"The local normalizing technique using the uniform distribution does not help .","label":5,"label_text":"OWN"} +{"text":"Using these two search modes , SPATTER guarantees that it will find the highest probability parse .","label":5,"label_text":"OWN"} +{"text":"Here defaults allow for a standalone handling of the problem .","label":4,"label_text":"BKG"} +{"text":"These senses were chosen because they are clearly different and we could collect sufficient number ( more than 20 ) of context examples .","label":5,"label_text":"OWN"} +{"text":"However , this is not allowed by Update Semantics which is eliminative : each new piece of information can only further refine the set of worlds .","label":6,"label_text":"OTH"} +{"text":"This will involve acknowledging or repairing user utterances , as well as repairing and requesting acknowledgement of the system 's own utterances .","label":5,"label_text":"OWN"} +{"text":"We define five operations on a TNCB .","label":5,"label_text":"OWN"} +{"text":"However , the statistical analyses still show that there is no significant difference in the performance of the algorithms in general .","label":5,"label_text":"OWN"} +{"text":"whereis thickness of the k-th link of, andis activity ( at time T ) of the node referred by the k-th link 
of.","label":5,"label_text":"OWN"} +{"text":"Class-based and similarity-based models provide an alternative to the independence assumption .","label":6,"label_text":"OTH"} +{"text":"We address the problem of automatically constructing a thesaurus by clustering words based on corpus data .","label":2,"label_text":"AIM"} +{"text":"Tableshows some examples of speech recognition disagreements between the two models .","label":5,"label_text":"OWN"} +{"text":"In order to explore the relationship of control and initiative to discourse processes like centering , we analyze the distribution of four different classes of anaphora for two data sets .","label":2,"label_text":"AIM"} +{"text":"That phonetic comparison is more precise is not particularly surprising , since etymon identity ignores a wealth of phonetic , phonological , and morphological data , whereas comparing phones has the side effect of also counting higher-level variation : if words differ in morphemes , their phonetic difference is going to be high .","label":5,"label_text":"OWN"} +{"text":"Using a thesaurus constructed by our method can improve pp-attachment disambiguation results .","label":5,"label_text":"OWN"} +{"text":"Yes or no answers to questions were also classified as assertions on the grounds that they were supplying the listener with factual information ;","label":5,"label_text":"OWN"} +{"text":"The classification was then applied to all natural contexts of the Brown corpus .","label":5,"label_text":"OWN"} +{"text":"This algorithm considers the words in W pairwise , avoiding the tractability problems in considering all possible combinations of senses for the group (if each word had m senses ) .","label":5,"label_text":"OWN"} +{"text":"Figureillustrates this proof-logical behaviour .","label":4,"label_text":"BKG"} +{"text":"Greensolves the problem by extending the boundaries of the analysis to discourse units .","label":6,"label_text":"OTH"} +{"text":"Since the number of parameters that exist in a 
multi-dimensional joint distribution is exponential if we allow n-ary dependencies in general , it is infeasible to estimate them with high accuracy with a data size available in practice .","label":5,"label_text":"OWN"} +{"text":"The syntactic rules inrelate ` category ' predicatesholding of a string and two spanning substrings ( we limit the rules here to two daughters for simplicity ) :","label":5,"label_text":"OWN"} +{"text":"For the control phases , we found that three types of utterances ( prompts , repetitions and summaries ) were consistently used to signal control shifts .","label":5,"label_text":"OWN"} +{"text":"This suggests that conditioning the occurrence of a grammar rule on the identity of its mother ( as in the 2-rule case ) accounts for some , but not all , of the contextual influences that operate .","label":5,"label_text":"OWN"} +{"text":"We also compare with the English language and draw some conclusions on the benefits of our approach .","label":5,"label_text":"OWN"} +{"text":"Shared Environment Mutual Belief Induction Schema","label":6,"label_text":"OTH"} +{"text":"Tableshows the result of the analysis for four , five , and six kanzi character sequences .","label":5,"label_text":"OWN"} +{"text":"In those simulations which used the phonotactic knowledge , a word boundary could not be inserted when doing so would create a word initial or final consonant cluster not on the list or would create a word without a vowel .","label":5,"label_text":"OWN"} +{"text":"Three things are striking about this data .","label":5,"label_text":"OWN"} +{"text":"So ,andare computed as follows :","label":5,"label_text":"OWN"} +{"text":"The original rules of the program are extended such that these bindings can be made effective .","label":6,"label_text":"OTH"} +{"text":"As a noun phrase 's countability in English is affected by its referential property ( generic , referential or ascriptive ) we present a method of determining the referential use of Japanese noun 
phrases .","label":2,"label_text":"AIM"} +{"text":"Therefore , the case value can be inherited .","label":5,"label_text":"OWN"} +{"text":"Sectiondescribes the substitutional treatment of ellipsis by way of a few examples presented in a simplified version of Quasi Logical Form ( QLF )Alshawi and Crouch 1992,Alshawi et al. 1992.","label":0,"label_text":"TXT"} +{"text":"In particular , we compared the performance of an MDL-based simulated annealing algorithm in hierarchical word clustering against that of one based on the Maximum Likelihood Estimator ( MLE , for short ) .","label":1,"label_text":"CTR"} +{"text":"As noted above , the precise grammaticality predictions depend on the kind of parsing model which is encoded in the states .","label":5,"label_text":"OWN"} +{"text":"The basic shortcoming of the maximum-likelihood objective function is that it does not encompass the compelling intuition behind Occam 's Razor , that simpler ( or smaller ) grammars are preferable over complex ( or larger ) grammars .","label":1,"label_text":"CTR"} +{"text":"Since online text becomes available in ever increasing volumes and an ever increasing number of languages , there is a growing need for robust processing techniques that can analyze text without expensive and time-consuming adaptation to new domains and genres .","label":4,"label_text":"BKG"} +{"text":"This allows peripheral extraction , where the ` gap ' is at the start or the end of e.g. 
a relative clause .","label":6,"label_text":"OTH"} +{"text":"Conversational partners not only respond to what others say , but feel free to volunteer information that is not requested and sometimes ask questions of their ownNickerson 1976.","label":4,"label_text":"BKG"} +{"text":"Additional code is added to maintain a chart of known successes and failures of each rule .","label":5,"label_text":"OWN"} +{"text":"The precise grammaticality predictions made by the dynamic approach depend upon the characterisation of the states , and hence depend on the particular parsing strategy which is specified by the dynamics .","label":5,"label_text":"OWN"} +{"text":"Unfortunately , input information can be insufficient in two respects :","label":4,"label_text":"BKG"} +{"text":"At present the word in focus is always the newest word in the purview .","label":5,"label_text":"OWN"} +{"text":"This engineering approach requires great effort in designing the representation and the mapping rules .","label":1,"label_text":"CTR"} +{"text":"Some form of text analysis is required to collect such a collection of pairs .","label":5,"label_text":"OWN"} +{"text":"Thus an ending-guessing rule looks exactly like a morphological rule apart from the I-class which is always void .","label":5,"label_text":"OWN"} +{"text":"I now show that the question whether the intersection of a FSA and an off-line parsable DCG is empty is undecidable .","label":5,"label_text":"OWN"} +{"text":"On the same 12 000-word test corpus , we counted 46 occurrences of words which have different meanings for the masculine and the feminine noun readings .","label":5,"label_text":"OWN"} +{"text":"Thus ` A , B ' indicates that A precedes B in the input , perhaps with some intervening material .","label":5,"label_text":"OWN"} +{"text":"For example ,Beaven 1992aemploys a chart to avoid recalculating the same combinations of signs more than once during testing , andPopowich 1994proposes a more general technique for storing which 
rule applications have been attempted ;Brew 1992avoids certain pathological cases by employing global constraints on the solution space ; researchers such asBrown et al. 1990andChen and Lee 1994provide a system for bag generation that is heuristically guided by probabilities .","label":6,"label_text":"OTH"} +{"text":"Hobbsalgorithm gets the right antecedent for it in, which is the little handle , but then fails on it in, whereas theBrennan et al.algorithm has the pump centered atand continues to select that as the antecedent for it throughout the text .","label":5,"label_text":"OWN"} +{"text":"Examples of prompts were things like `` Yes '' and `` Uhu '' .","label":5,"label_text":"OWN"} +{"text":"The propositions formed can then be judged for plausibility .","label":6,"label_text":"OTH"} +{"text":"Constituent and non-constituent coordination have been treated as entirely separate phenomena ( seevan Oirsouw 1987for discussion ) , and different mechanisms have been proposed for each .","label":4,"label_text":"BKG"} +{"text":"We would like to extend the system by using a more detailed transcription system .","label":5,"label_text":"OWN"} +{"text":"However , other experimental results suggest that distance vectors contain some different semantic information from co-occurrence vectors .","label":5,"label_text":"OWN"} +{"text":"Again , errors mainly affect the assignment of words to subclasses within one major word class .","label":5,"label_text":"OWN"} +{"text":"PROOF .","label":5,"label_text":"OWN"} +{"text":"The abstract unfolding tree in figureclearly shows why there exists the need for subsumption checking :","label":5,"label_text":"OWN"} +{"text":"Multiple VP ellipsisGardent 1993poses problems at the level of determining which VP is the antecedent of which ellipsis .","label":5,"label_text":"OWN"} +{"text":"This smoothing is also performed on the Inside-Outside post-pass of our algorithm .","label":6,"label_text":"OTH"} +{"text":"A related issue concerns the 
interpretation of embedded Prolog code .","label":5,"label_text":"OWN"} +{"text":"This paper falls directly between these approaches , using statistical information derived from corpora analysis to weight syntactic analyses produced by a ` principles and parameters ' parser .","label":2,"label_text":"AIM"} +{"text":"Using the meanings for Bill , supported , Hillary , and opposed , R-relationsand, and Axiom I , we can derive meanings for Bill supported and Hillary opposed in the fashion described in Section:","label":5,"label_text":"OWN"} +{"text":"The probability of a parse is just the product of the probability of each of the actions made in constructing the parse , according to the decision-tree models .","label":5,"label_text":"OWN"} +{"text":"However , instead of letting each daughter node contribute with the full entropy of the LHS phrase of the corresponding grammar rule , these entropies are weighted with the relative frequency of use of each alternative choice of grammar rule .","label":5,"label_text":"OWN"} +{"text":"Heuristics for deciding how to use re-estimation in an effective manner are given .","label":5,"label_text":"OWN"} +{"text":"Secondly , its aim is not to tackle data sparseness by grouping a large number of objects into a smaller number of classes , but to increase the precision of the model by dividing a single object ( the training corpus ) into some larger number of sub-objects ( the clusters of sentences ) .","label":2,"label_text":"AIM"} +{"text":"The present paper is concerned with tagging languages and sublanguages for which no a priori knowledge about grammatical categories is available , a situation that occurs often in practiceBrill and Marcus 1992a.","label":2,"label_text":"AIM"} +{"text":"This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns -- the kind of data one finds in on-line thesauri , or as the output of distributional clustering algorithms 
.","label":2,"label_text":"AIM"} +{"text":"Structure reduction","label":5,"label_text":"OWN"} +{"text":"The corpus likelihood is then, and the per-word entropy ,, is thus minimized .","label":5,"label_text":"OWN"} +{"text":"Ifis terminal but not equal to, then addtoif possible .","label":6,"label_text":"OTH"} +{"text":"If we change the definition of success in the task , we change whether a strategy is beneficial .","label":5,"label_text":"OWN"} +{"text":"The basic framework on which the implementation is built is similar to Tree Adjoining GrammarJoshi et al. 1975.","label":3,"label_text":"BAS"} +{"text":"Therefore it is much narrower notion .","label":1,"label_text":"CTR"} +{"text":"Some of them have already been introduced in section.","label":0,"label_text":"TXT"} +{"text":"The overall model splits the contributions of contentordering.","label":5,"label_text":"OWN"} +{"text":"Our endorsements are based on the semantics of the utterance used to convey a belief , the level of expertise of the agent conveying the belief , stereotypical knowledge , etc .","label":5,"label_text":"OWN"} +{"text":"lhip_success","label":5,"label_text":"OWN"} +{"text":"Of those unknown words , 9385 ( i.e. about 70 % ) are capitalised words , which are correctly and unambiguously analysed by the guesser as proper nouns with more than 95 % accuracy .","label":5,"label_text":"OWN"} +{"text":"The superior performance of this model confirms results presented byBriscoe et al. 
1994,Merialdo 1994, andElworthy 1994for English : empirically obtained initial values for transition and output probabilities with a small number of training iterations lead to significantly better results than intuitively generated biases do .","label":5,"label_text":"OWN"} +{"text":"This goal is analogous to that used in the work described earlier on finding word classes by clustering .","label":5,"label_text":"OWN"} +{"text":"We represent the elliptical sentence , again abbreviated , as a ( partially resolved ) QLF :","label":5,"label_text":"OWN"} +{"text":"An appropriate formalization for utteranceand the necessary semantic and pragmatic knowledge is given in.","label":5,"label_text":"OWN"} +{"text":"Clearly then , the order of evaluation of the complements in a rule can profoundly influence the efficiency of generation , and an efficient head-driven generator must order the evaluation of the complements in a rule accordingly .","label":5,"label_text":"OWN"} +{"text":"Now , add to h a random number in the rangeand check that the result is still in the range.","label":6,"label_text":"OTH"} +{"text":"However , in order to disambiguate the tag and place the subject markers it is only necessary to know that it is a noun or else a verb .","label":5,"label_text":"OWN"} +{"text":"Here , two unattached lexical items have been identified , together with two instances of rule 4 , which combines a NP with a postmodifying PP .","label":5,"label_text":"OWN"} +{"text":"I also wish to thank the NLP group at SICS for contributing to a very conductive atmosphere to work in , and in particular Ivan Bretan for valuable comments on draft versions of this article .","label":5,"label_text":"OWN"} +{"text":"At the merging phase , rules which have not scored high enough to be included into the final rule-sets are merged into more general rules , then re-scored and depending on their score added to the final rule-sets .","label":5,"label_text":"OWN"} +{"text":"Both of the predictive 
parsers employ one symbol of lookahead , incorporated into the parsing tables by the LALR technique .","label":5,"label_text":"OWN"} +{"text":"The corpus results are better because the training technique explicitly targeted the rule-sets to the most frequent cases of the corpus rather than the lexicon .","label":5,"label_text":"OWN"} +{"text":"Some of them because they didn't have the correct sense included in the WordNet taxonomy , and others because the correct class had not been induced because there wasn't enough evidence .","label":5,"label_text":"OWN"} +{"text":"Hobbsalgorithm operates on one sentence at a time , but the structure of previous sentences in the discourse is available .","label":6,"label_text":"OTH"} +{"text":"For reasons made clear below , only sequences consisting entirely of words from Roget 's thesaurus were retained , giving a total of 308 test triples .","label":5,"label_text":"OWN"} +{"text":"We can merge two rules which have scored below the threshold and have the same affix ( or ending ) and the initial class ( I ) .","label":5,"label_text":"OWN"} +{"text":"The homogeneity becomes especially helpful in the case where the input verifies the default assumption rendering unnecessary any recomputation .","label":5,"label_text":"OWN"} +{"text":"The role motivated is the link between the content of subordinate clause and the main clause .","label":5,"label_text":"OWN"} +{"text":"Further examples exist where productive morphological processes ( e.g. 
, affixation ) lead to the lexicalisation in one language of concepts that exist as syntagmatic constructs in another .","label":5,"label_text":"OWN"} +{"text":"Heuristics 4 : inserted phrases between commas or parentheses","label":5,"label_text":"OWN"} +{"text":"This is partially due to the reduced number of free parameters in the information-theoretical networks .","label":5,"label_text":"OWN"} +{"text":"These are now extended to 15 words in the pre-subject , 12 in the subject - see Section.","label":5,"label_text":"OWN"} +{"text":"Using the notationto represent a state of basic category A carrying a category B on its stack , the hierarchical structure of the sentence :","label":5,"label_text":"OWN"} +{"text":"If we also consider words joined with clitics , the number of different combinations is much higher , namely 6525 .","label":6,"label_text":"OTH"} +{"text":"However , we have introduced distinctions between singular nouns ( NOUN-SG ) , plural nouns ( NOUN-PL ) and number-invariant nouns ( NOUN-INV ) such as taux ( rate \/ rates ) .","label":5,"label_text":"OWN"} +{"text":"We extracted , from a corpus of newspaper articles ( Libration ) , a list of 13 500 words unknown to the basic lexicon .","label":5,"label_text":"OWN"} +{"text":"Recall is the number of correct tokens divided by the total number of tokens of t ( in the first column ) .","label":5,"label_text":"OWN"} +{"text":"Target clauses in gapping constructions are therefore represented with the overt constituents fronted out of an elided sentence node ; for instance the representation of the target clause in exampleis shown in Figure( the empty node is indicated by) .","label":5,"label_text":"OWN"} +{"text":"As well as their wide variability with respect to the BU-LC parser , the absolute variability of the LR parse times is high ( reflected in large standard deviations ---- see Table) .","label":5,"label_text":"OWN"} +{"text":"As the COMPASS project makes ample use of Xerox technology for its core 
look-up engine and for POS disambiguation for languages other than German , the obvious thing to do was to develop a German language model for the Xerox tagger .","label":3,"label_text":"BAS"} +{"text":"whereandare activity ( at time T ) collected from the nodes referred in the rfrant and rfr respectively ;is activity given from outside ( at time T ) ; the output functionlimits the value to [ 0,1 ] .","label":5,"label_text":"OWN"} +{"text":"Figureshows the change in disambiguation precision as the corpus size for co-occurrence statistics increases from 200 words to 20 M words .","label":5,"label_text":"OWN"} +{"text":"Two experiments were conducted on three corpora : 300 k words of Swedish text from the ECI Multilingual CD-ROM , and 100 k words each of English and French from a corpus of International Telecommunications Union text .","label":5,"label_text":"OWN"} +{"text":"The second is to use a calculus in which types can undergo ` type-raising 'Dowty 1988, or can be formed by abstraction ( as in the Lambek Calculus ,Lambek 1958) .","label":6,"label_text":"OTH"} +{"text":"For each of the nodes listed above , the decision tree could also ask about the number of children and span of the node .","label":5,"label_text":"OWN"} +{"text":"As things stand the stochastic procedure is free to generate structures where,but, which are not in fact legal feature structures .","label":5,"label_text":"OWN"} +{"text":"Non-Linearity","label":4,"label_text":"BKG"} +{"text":"The disadvantage of backtracking is that partial results are thrown away which could be reused during further processing .","label":5,"label_text":"OWN"} +{"text":"Learning dependencies that exist between these generalized case frame slots .","label":4,"label_text":"BKG"} +{"text":"Each article in the ECD describes what is called a ` lexeme ' : a word in some specific reading .","label":6,"label_text":"OTH"} +{"text":"A key part of this model is that some types of evidence provide better support for beliefs than 
other types .","label":6,"label_text":"OTH"} +{"text":"As the QLF becomes more instantiated , the set of possible evaluations narrows towards a singleton .","label":5,"label_text":"OWN"} +{"text":"Letand","label":5,"label_text":"OWN"} +{"text":"The cost is a deficiency in modelling , since this takes no account of the fact that token identity of nodes is transitive .","label":5,"label_text":"OWN"} +{"text":"rules are applied cascadingly using the most accurate rules first .","label":5,"label_text":"OWN"} +{"text":"We present in this section the results of our research which has been implemented and in the next section , other directions which seems obviously promising .","label":0,"label_text":"TXT"} +{"text":"However , when this technique is applied to the ANLT grammar the increased overheads in rule invocation and structure building actually slow the parser down .","label":5,"label_text":"OWN"} +{"text":"the plan contains a description that is useful for making an identification plan that the hearer can execute to identify the referent , and","label":5,"label_text":"OWN"} +{"text":"Sanctioned non-coverage means that some number of special ` ignore ' rules have been applied which simulate coverage of input material lying between the islands , thus in effect making the islands contiguous .","label":5,"label_text":"OWN"} +{"text":"On a test suiteChanod and Tapanainen 1995extracted from the newspaper Le Monde ( 12 000 words ) tagged with either of our two taggers , we counted only three errors that violated gender agreement .","label":5,"label_text":"OWN"} +{"text":"We now apply some simplifying independence assumptions concerning relation graphs .","label":5,"label_text":"OWN"} +{"text":"Search parameters were set so that each execution took around 5 seconds on a Sun Sparc 10 .","label":5,"label_text":"OWN"} +{"text":"The task of the algorithm is to generate all such structures and to equip them with probabilities .","label":5,"label_text":"OWN"} +{"text":"For 
purposes of simplicity and because on the whole is it likely that words will contain no more than one errorDamerau 1964,Pollock and Zamora 1983, normal ` no error ' analysis usually resumes if an error rule succeeds .","label":5,"label_text":"OWN"} +{"text":"As such , the existential quantification inhas to be stipulated , whereas our analysis acquires this existential quantification ` for free ' .","label":1,"label_text":"CTR"} +{"text":"Of course it is possible to experiment with different ways of taking the context-free skeleton ( including as much information as possible \/ useful ) .","label":5,"label_text":"OWN"} +{"text":"Swedish","label":5,"label_text":"OWN"} +{"text":"Note thatspecifies local tree admissibility ( thus obviating the need for rewrite rules ) , and,andwork together to capture the effect ofand.","label":5,"label_text":"OWN"} +{"text":"To initiate the process , speaker A presents an initial version of a referring expression on which speaker B passes judgment .","label":6,"label_text":"OTH"} +{"text":"Both methods agree on how the 87 sites are distributed among these dialects .","label":5,"label_text":"OWN"} +{"text":"I begin by describing decision-tree modeling , showing that decision-tree models are equivalent to interpolated n-gram models .","label":0,"label_text":"TXT"} +{"text":"We found no instances of the first case ; although speakers did produce phrases like `` OK '' and then continue , the `` OK '' was always part of the same intonational contour as that further information and there was no break between the two , suggesting the phrase was a prefix and not a cue .","label":5,"label_text":"OWN"} +{"text":"There is a small set of 4 extensions to the grammar , or semi-local constraints .","label":5,"label_text":"OWN"} +{"text":"The algorithm and an example is reproduced below .","label":6,"label_text":"OTH"} +{"text":"This paper presents a model for engaging in collaborative negotiation to resolve conflicts in agents ' beliefs about 
domain knowledge .","label":2,"label_text":"AIM"} +{"text":"Figureshows the detail of this algorithm , wheredenotes the number of possible values assumed by, N the input data size , anddenotes the logarithm to the base 2 .","label":6,"label_text":"OTH"} +{"text":"This allows very much faster parsing and gives a lower error rate , at the price of a small loss in coverage .","label":5,"label_text":"OWN"} +{"text":"The operational semantics for-Prolog state thatis provable if and only if [ c \/ x ] G is provable , where c is a new variable of the same type as x that does not otherwise occur in the current signature .","label":5,"label_text":"OWN"} +{"text":"In order to represent grammar rules distributively , we adopt categorial grammar , where we can an attach local grammar rule to each word and phrase .","label":3,"label_text":"BAS"} +{"text":"The probability assigned by a cluster to an N-gram was taken to be the simple maximum likelihood ( relative frequency ) value where this was non-zero .","label":5,"label_text":"OWN"} +{"text":"If the energy differencebetween an old and a new state is less than the available energy , then the transition is accepted .","label":6,"label_text":"OTH"} +{"text":"Other criteria that should be studied are second and higher order statistics on the respective parameters .","label":5,"label_text":"OWN"} +{"text":"Should it make choices when there is more than one proposed anchor with the same ranking ?","label":5,"label_text":"OWN"} +{"text":"This model reduces the O () intractability of top-down approaches discussed above by dramatically reducing the number of binary partitions that are considered .","label":6,"label_text":"OTH"} +{"text":"This contrasts with unlimited right-recursion where there is no growth in state length :","label":5,"label_text":"OWN"} +{"text":"For example , the f-structure given inresults in two R-relations :","label":5,"label_text":"OWN"} +{"text":"Consider the French sentence :","label":5,"label_text":"OWN"} 
+{"text":"We contrast the account with relevant past work in Section, and conclude in Section.","label":0,"label_text":"TXT"} +{"text":"The differences in ambiguity types of the models also have effects on the types of errors produced by the German model .","label":5,"label_text":"OWN"} +{"text":"The fragmentation of the parse analysis","label":5,"label_text":"OWN"} +{"text":"Still , a lexicon is needed that specifies the possible parts of speech for every word .","label":1,"label_text":"CTR"} +{"text":"I is an interpretation,","label":5,"label_text":"OWN"} +{"text":"Such coincidental situations are very rare in FrenchEl-Bze 1993.","label":5,"label_text":"OWN"} +{"text":"Unknown words show a weak tendency to give higher accuracy on smaller tagsets .","label":5,"label_text":"OWN"} +{"text":"Our algorithm consistently outperformed the Inside-Outside algorithm in these experiments .","label":1,"label_text":"CTR"} +{"text":"That is , we state in which trees and feature structures are admissible , and how tree and feature based information is to be synchronised ; examples will be given shortly .","label":5,"label_text":"OWN"} +{"text":"It is possible to compromise here , in such a way that the parser is guaranteed to terminate , but sometimes misses a few parse-trees .","label":5,"label_text":"OWN"} +{"text":"The final result is the TNCB in figure, whose orthography is `` the big brown dog barked '' .","label":5,"label_text":"OWN"} +{"text":"Thus ,becomes a set of links :, whereis a link with thickness.","label":5,"label_text":"OWN"} +{"text":": source language word string","label":5,"label_text":"OWN"} +{"text":"The numbers presented in the previous section are intuitively unsatisfying .","label":5,"label_text":"OWN"} +{"text":"Part-of-speech tagging is of interest for a number of applications , for example access to text data basesKupiec 1993, robust parsingAbney 1991, and general parsingde Marcken 1990,Charniak et al. 
1994.","label":4,"label_text":"BKG"} +{"text":"First we measure the accuracy of tagging solely on unknown words :","label":5,"label_text":"OWN"} +{"text":"Figureshows a graphical comparison of the two analysis models .","label":6,"label_text":"OTH"} +{"text":"Now the algorithm itself is presented :","label":6,"label_text":"OTH"} +{"text":"In the Inside-Outside algorithm , the gradient descent search discovers the `` nearest '' local minimum in the search landscape to the initial grammar .","label":6,"label_text":"OTH"} +{"text":"Notice that the temperature t is a parameter of the perturb function .","label":6,"label_text":"OTH"} +{"text":"A dendroid distribution can be represented by a dependency forest ( i.e. a set of dependency trees ) , whose nodes represent the random variables , and whose directed arcs represent the dependencies that exist between these random variables , each labeled with a number of parameters specifying the probabilistic dependency .","label":4,"label_text":"BKG"} +{"text":"The different approaches are called all-word vs. same-word feature string comparisons .","label":5,"label_text":"OWN"} +{"text":"Here we have a 2-place relation not , which is backed up by the following MP :","label":5,"label_text":"OWN"} +{"text":"In training a tagger for a given language , a major part of the knowledge engineering required can therefore be localised in the choice of the tagset .","label":4,"label_text":"BKG"} +{"text":"Definitions of these transition types appear in figure.","label":6,"label_text":"OTH"} +{"text":"InChater et al. 
1994it is argued that the incrementally derived meanings are not judged for plausibility directly , but instead are first turned into existentially quantified propositions .","label":6,"label_text":"OTH"} +{"text":"It is therefore necessary to make various generalisations over the states , for example by ignoring the Rlists .","label":5,"label_text":"OWN"} +{"text":"( inducing a Parallel relation instead of Narration ) , a temporal ordering among the sentences is no longer implied .","label":5,"label_text":"OWN"} +{"text":"An interesting , albeit minor interest of not introducing gender distinction , is that there is then no problem with tagging phrases like mon allusion ( my allusion ) where the masculine form of the possessive determiner mon precedes a feminine singular noun that begins with a vowel , for euphonic reasons .","label":5,"label_text":"OWN"} +{"text":"The number of superfluous analyses can be reduced by imposing a local threshold level , of say 0.5 .","label":5,"label_text":"OWN"} +{"text":"A more distant goal is to ascertain whether the performance of the model can improve after parsing new texts and processing the data therein even without hand-correction of the parses , and if so what the limits are to such `` self-improvement '' .","label":5,"label_text":"OWN"} +{"text":"First I give a simple algorithm to encode any instance of a PCP as a pair , consisting of a FSA and an off-line parsable DCG , in such a way that the question whether there is a solution to this PCP is equivalent to the question whether the intersection of this FSA and DCG is empty .","label":5,"label_text":"OWN"} +{"text":"Since code words are used as compact indices into the lexicon , the original sample could be reconstructed completely by looking up each code word in this list and replacing it with its phoneme sequence from the lexicon .","label":5,"label_text":"OWN"} +{"text":"( one must apply the knowledge that Austrians speak German to correctly interpret the ellipsis ) 
.","label":5,"label_text":"OWN"} +{"text":"Unlike the within-language associations , which are rich and diverse , these between-language associations involve primarily translation equivalent terms that are experienced together frequently .","label":6,"label_text":"OTH"} +{"text":"Then the weights are fixed and the trained net is ready to classify unseen sentences .","label":5,"label_text":"OWN"} +{"text":"The use of terms and indices has parallels to proposals due toKehler and KampKehler 1993a,Gawron and Peters 1990.","label":3,"label_text":"BAS"} +{"text":"Base case lookup must be defined specifically for different grammatical theories and directions of processing by the predicate lookup\/2 , whose first argument is the goal and whose second argument is the selected base case .","label":6,"label_text":"OTH"} +{"text":"However , if coordination of the first two conjuncts occurs at this level , it is difficult to see how to deal with the final conjunct .","label":1,"label_text":"CTR"} +{"text":"Imagine a bag of signs , corresponding to `` the big brown dog barked '' , has been passed to the generation phase .","label":5,"label_text":"OWN"} +{"text":"The technique achieves a good coverage , even with few co-occurrence triples .","label":5,"label_text":"OWN"} +{"text":"For comparison , a variety of other data has been collected .","label":5,"label_text":"OWN"} +{"text":"Apart from the obvious benefits of automating the process , such as speed and accuracy , it could show up cases where there is more than one possible tone transcription , possibly with different parameter settings for the Fscaling function .","label":6,"label_text":"OTH"} +{"text":"Theoperator is applied to all possible lexicon-entry pairs and if a rule produced by such an application has already been extracted from another pair , its frequency count ( f ) is incremented .","label":5,"label_text":"OWN"} +{"text":"In fact there are infinitely many different trees possible 
.","label":1,"label_text":"CTR"} +{"text":"Such an approach allows for the treatment of missing , extraneous , interchanged or misused wordsTeitelbaum 1973,Saito and Tomita 1988,Nederhof and Bertsch 1994.","label":6,"label_text":"OTH"} +{"text":"There are two other ordering operators based on general ordering principles : the local focus principle and the proof time order principleHuang 1994b.","label":5,"label_text":"OWN"} +{"text":"Hobbshas some difficulties determining the function of this repetition , but we maintain that the function follows from the more general principles of the control rules : speakers signal that they wish to shift control by supplying no new propositional content .","label":1,"label_text":"CTR"} +{"text":"Given the verb v , the syntactic-relationship s and the candidate class c , the Association Score , Assoc , between v and c in s is defined :","label":6,"label_text":"OTH"} +{"text":"We could therefore allow compounds to be delivered as values of merged LF 's , eg := sleutelbos .","label":5,"label_text":"OWN"} +{"text":"It is an inherent feature of phrase structure grammars that they classify the strings of words from a language into two ( infinite ) sets , one containing the grammatical strings and the other the ungrammatical strings .","label":6,"label_text":"OTH"} +{"text":"This puts a strong restriction on the shape of semantic analysis rules : one of the leaves must share its semantic form with the root node .","label":6,"label_text":"OTH"} +{"text":"The program ensures that the same area of the search space is not re-explored by subsequent searches .","label":5,"label_text":"OWN"} +{"text":"The results using distance vectors are shown by dots () , and using co-occurrence vectors from the 1987 WSJ ( 20 M words ) by circles () .","label":5,"label_text":"OWN"} +{"text":"We claim that natural languages can be considered as a trace of these representations , in which it is possible , with systematic and detailled linguistic studies , to 
light up the way spatiotemporal properties are represented and on which basic concepts these representations lie .","label":5,"label_text":"OWN"} +{"text":"Using the previous method is not possible because the sentence is a too small unit to converge .","label":4,"label_text":"BKG"} +{"text":"But if a program is to catch such errors very soon after they are entered , it will have to operate with less than the complete sentence .","label":5,"label_text":"OWN"} +{"text":"An optional item in a form is denoted by surrounding it with square brackets ` [ ... ] ' .","label":5,"label_text":"OWN"} +{"text":"113,583 ( 93.1 % ) could be correctly mapped into their corresponding lemma form .","label":5,"label_text":"OWN"} +{"text":"We found that there were 266 verbs , whose ` arg 2 ' slot is dependent on some of the other preposition slots .","label":5,"label_text":"OWN"} +{"text":"Here , it would appear , only one reading is possible , i.e. the one where John gave Mary her slice of pizza just after she stared or started to stare at him .","label":5,"label_text":"OWN"} +{"text":"Furthermore , the only free parameters in our search are the parameters; all other symbols ( except S ) are fixed to expand uniformly .","label":5,"label_text":"OWN"} +{"text":"In, the second sentence is an elaboration of the first , and they therefore refer to aspects of the same event rather than to two sequential events .","label":1,"label_text":"CTR"} +{"text":"be the sets of all verbs , nouns , syntactic positions , and possible noun classes , respectively .","label":5,"label_text":"OWN"} +{"text":"All three of the parsers have theoretical worst-case complexities that are either exponential , or polynomial on grammar size but with an extremely large multiplier .","label":5,"label_text":"OWN"} +{"text":"I presented instrumental data from Bamileke Dschang and showed how the function could be specialised for this language .","label":5,"label_text":"OWN"} +{"text":"They can be divided into the thirty 
four categories which are exemplified in Table.","label":5,"label_text":"OWN"} +{"text":"Hobbsalso assumes that his algorithm can somehow collect discourse entities mentioned alone into sets as co-specifiers of plural anaphors .","label":6,"label_text":"OTH"} +{"text":"For instance , superlative adjectives can act as nouns , so they are initially given the 2 tags : noun or adjective .","label":5,"label_text":"OWN"} +{"text":"In particular , the model we use in our experiments has noun clusters with cluster memberships determined byand centroid distributions determined by.","label":5,"label_text":"OWN"} +{"text":"With respect to a sophisticated output , we aim to combine VM-GEN with a flexible repair component .","label":5,"label_text":"OWN"} +{"text":"Both rules receive an index :","label":5,"label_text":"OWN"} +{"text":"In particular , the question of internal versus external criteria for tagset design is considered , with the general conclusion that external ( linguistic ) criteria should be followed .","label":2,"label_text":"AIM"} +{"text":"They are considered to consist of two parts -- the base and the collocate .","label":6,"label_text":"OTH"} +{"text":"If both the belief and relationship were accepted by the evaluator , the search on the current branch will terminate , since once the system accepts a belief , it is irrelevant whether it accepts the user 's support for that beliefYoung et al. 
1994.","label":5,"label_text":"OWN"} +{"text":"To extract such rules a special operatoris applied to every pair of words from the lexicon .","label":5,"label_text":"OWN"} +{"text":"For example , in sentencebelow , there seems a preference for an outer scope reading for the first quantifier as soon as we interpret child .","label":4,"label_text":"BKG"} +{"text":"Learning process","label":6,"label_text":"OTH"} +{"text":"The method was systematically evaluated on the Brown corpus .","label":5,"label_text":"OWN"} +{"text":"The initial grammar is listed in Table.","label":5,"label_text":"OWN"} +{"text":"This predicts that entities realized in subject position are more salient , since even if an adjunct clause linearly precedes the main subject , any noun phrases within it will be deeper in the parse tree .","label":6,"label_text":"OTH"} +{"text":"Mellish 1989introduced some chart-based techniques using only syntactic information for extragrammatical sentences .","label":6,"label_text":"OTH"} +{"text":"We used these data as input to the learning algorithm and acquired case frame patterns for each of the 100 verbs .","label":5,"label_text":"OWN"} +{"text":"ThoughGerdemann 1991showed how to modify the restriction function to make top-down information available for the bottom-up completion step , Earley generation with top-down prediction still has a problem in that generating the subparts of a construction in the wrong order might lead to massive nondeterminacy or even nontermination .","label":1,"label_text":"CTR"} +{"text":"The complexity of this phase is therefore the product of the picking and combining complexities , i.e..","label":5,"label_text":"OWN"} +{"text":"In the approach to discourse structure developed inSidner 1983andGrosz et al. 
1986, a discourse exhibits both global and local coherence .","label":6,"label_text":"OTH"} +{"text":"We conducted a similar analysis for those cue words that have been identified in the literature .","label":5,"label_text":"OWN"} +{"text":"If we have more compound nouns as knowledge , we could use a finer hierarchy level .","label":5,"label_text":"OWN"} +{"text":"Let N be a node in the current tree description .","label":5,"label_text":"OWN"} +{"text":"In contrast , statistical techniques using lexeme co-occurrence provide a relatively simple mechanism which can imitate semantic filtering in many cases .","label":6,"label_text":"OTH"} +{"text":"Our lexicon is based on a finite-state transducer lexiconKarttunen et al. 1992.","label":3,"label_text":"BAS"} +{"text":"Alternatively , and more practically , it would be possible to define a similarity measure between bigrams as a function of similarities between corresponding words in them .","label":5,"label_text":"OWN"} +{"text":"Our model is capable of selecting the most effective aspect to address in its pursuit of conflict resolution in cases where multiple conflicts arise , and of selecting appropriate evidence to justify the need for such modification .","label":5,"label_text":"OWN"} +{"text":"The magnitude of the speed-up is less than might be expected , given the enthusiastic advocation of non-deterministic CF LR parsing for NL by some researchersTomita 1987,Wright et al. 
1991, and in the light of improvements observed for predictive over pure bottom-up parsingMoore and Dowding 1991.","label":5,"label_text":"OWN"} +{"text":"When Select-Focus-Modification is applied toTeaches(Smith,AI) , the algorithm will first be recursively invoked on On-Sabbatical(Smith,next year) to determine the focus for modifying the child belief ( stepin Figure) .","label":5,"label_text":"OWN"} +{"text":"However , if a third sentence is added , an ambiguity results .","label":5,"label_text":"OWN"} +{"text":"This means that , in cases such as, where , on the globally acceptable reading , the PP is an adjunct of the NP the man , this attachment will have to be revised , and the PP retrospectively adjoined into the relevant N ' node .","label":5,"label_text":"OWN"} +{"text":"In contrast ,","label":5,"label_text":"OWN"} +{"text":"This is then generated as.","label":5,"label_text":"OWN"} +{"text":"In this paper , we present a robust parser with a recovery mechanism .","label":2,"label_text":"AIM"} +{"text":"Brill and Marcus 1992ahave shown that the effort necessary to construct the part-of-speech lexicon can be considerably reduced by combining learning procedures and a partial part-of-speech categorization elicited from an informant .","label":6,"label_text":"OTH"} +{"text":"They did this either by repeating what had just been said ( 6 occasions ) , or by giving a summary of what they had said in the preceding utterances of the phase ( 9 occasions ) .","label":5,"label_text":"OWN"} +{"text":"Through reordering the right-hand sides of the rules in the grammar the amount of nondeterminism can be drastically reduced as shown inMinnen et al. 
1996.","label":3,"label_text":"BAS"} +{"text":"ABDICATIONS correspond to those cases where the controller produces a prompt as the last utterance of the segment .","label":6,"label_text":"OTH"} +{"text":"This state is appropriate as the initial state for a parse of both directly , or of after 3 pm through my secretary , resulting in a final state of category sentence .","label":5,"label_text":"OWN"} +{"text":"Other types of conditional cooccurrence probabilities have been used in probabilistic parsingBlack et al. 1993.","label":6,"label_text":"OTH"} +{"text":"We did , however , find instances of the second case : twice following prompts and once following a summary , there was a long pause , indicating that the speaker was not ready to respond .","label":5,"label_text":"OWN"} +{"text":"We proposed some evaluation measures for the SRs learning task .","label":2,"label_text":"AIM"} +{"text":"The algorithm works as follows : A procedure SCAN is carried out for each state in.","label":6,"label_text":"OTH"} +{"text":"Concerning the PPs , unattached prepositions involve empty or unfilled roles in the Conceptual Structures ( CSs ) , expressed in a frame-based languageZarri 1992.","label":3,"label_text":"BAS"} +{"text":"Note that the coverage of ` MDL-Thesaurus ' significantly outperformed that of ` Word-Based , ' while basically maintaining high accuracy ( though it drops somewhat ) , indicating that using an automatically constructed thesaurus can improve disambiguation results in terms of coverage .","label":1,"label_text":"CTR"} +{"text":"According to the definition of unit element ,.","label":5,"label_text":"OWN"} +{"text":"Morphological analysis is first called with the assumption that the word is free of errors .","label":5,"label_text":"OWN"} +{"text":"For example , when backup compiler disk is encountered , the analysis will be :","label":6,"label_text":"OTH"} +{"text":"We start by using features similar to those first investigated byBiber, but we concentrate on 
those that are easy to compute assuming we have a part of speech taggerCutting et al. 1992,Church 1988, such as such as third person pronoun occurrence rate as opposed to 'general hedges 'Biber 1989.","label":3,"label_text":"BAS"} +{"text":"In order to solve point d above , we have foreseen two possibilities :","label":5,"label_text":"OWN"} +{"text":"Insertions add extra information , usually modifiers e.g.","label":4,"label_text":"BKG"} +{"text":"The parser was trained on the first 30,800 sentences from the Lancaster treebank .","label":5,"label_text":"OWN"} +{"text":"This can be done directly , by measuring the distance between distributions of the form, corresponding to different bigrams.","label":5,"label_text":"OWN"} +{"text":"The experimental result shows 68 % - 77 % accuracy in error recovery .","label":5,"label_text":"OWN"} +{"text":"From a purely strategic point of view , the agent may have no interest in whether the stranger 's goals are met .","label":4,"label_text":"BKG"} +{"text":"seeKaplan and Kay 1994for an exposition of the mathematical basis .","label":4,"label_text":"BKG"} +{"text":"Detailed guidelines on the use of the individual tags are available inThielen and Sailer 1994.","label":5,"label_text":"OWN"} +{"text":"However , when looking for a pronoun 's antecedent within a sentence , it will go sequentially further and further up the tree to the left of the pronoun , and that failing will look in the previous sentence .","label":6,"label_text":"OTH"} +{"text":"The first reading involves scoping the book quantifier before ellipsis resolution .","label":6,"label_text":"OTH"} +{"text":"With the description length of a model defined in the above manner , we wish to select a model having the minimum description length and output it as the result of clustering .","label":5,"label_text":"OWN"} +{"text":"This predicts that any substring of a sentence can coordinate with itself , and hence that any substring of a sentence can act as a conjunct 
.","label":5,"label_text":"OWN"} +{"text":"As a semantic representation of words , distance vectors are expected to depend very weakly on the particular source dictionary .","label":6,"label_text":"OTH"} +{"text":"Larger tagset tends to give larger accuracy , though with less of a spread than for Swedish .","label":5,"label_text":"OWN"} +{"text":"Some denote a change of position which always occur ( voyager-to travel ) .","label":5,"label_text":"OWN"} +{"text":"( with an equivalent vertical alignment , henceforth to be used in this paper , on the right ) .","label":5,"label_text":"OWN"} +{"text":"And my own treatment of selectional constraintsResnik 1993provides a way to describe the plausibility of co-occurrence in terms of WordNet 's semantic categories , using co-occurrence relationships mediated by syntactic structure .","label":6,"label_text":"OTH"} +{"text":"A default description defines a set of operations that should be carried out in a certain situation where the generation process can not be continued .","label":5,"label_text":"OWN"} +{"text":"This requires that we by transitivity equate the nodes that are dominated by a cutnode in a structurally equivalent way ; if there is a path from a cutnodeto a nodeand a path from a cutnodeto a nodewith an identical sequence of labels , the two nodesandmust be equated .","label":5,"label_text":"OWN"} +{"text":"Simple techniques to decide the best sense c given the target noun n using estimates of the n-grams :,,and, obtained from supervised and un-supervised corpora .","label":5,"label_text":"OWN"} +{"text":"The corpus used in our first experiment was derived from newswire text automatically parsed byHindle's parser FidditchHindle 1993.","label":3,"label_text":"BAS"} +{"text":"These words are added to the lexicon at the end of first iteration when re-estimation is being used , so that the probabilities of their hypotheses subsequently diverge from being uniform .","label":5,"label_text":"OWN"} +{"text":"Performance 
statistics are based on 1,200 executions of each program .","label":5,"label_text":"OWN"} +{"text":"Early work in the field relied on a corpus which had been tagged by a human annotator to train the model .","label":6,"label_text":"OTH"} +{"text":"The claim is that shifts of control often do not occur until the controller indicates the end of a discourse segment by abdicating or producing a repetition \/ summary .","label":6,"label_text":"OTH"} +{"text":"Let us for the moment consider the case where the language model consists only of a unigram probability distribution for the words in the vocabulary , with no N-gram ( for N > 1 ) or fuller linguistic constraints considered .","label":4,"label_text":"BKG"} +{"text":"In a strongly co-operative domain , such as TRAINS , the system can subordinate working on its own goals to locally working on concerns of the user , without necessarily having to have any shared discourse plan .","label":5,"label_text":"OWN"} +{"text":"We have made this choice because we want to be able to analyse the performance of the algorithms across different domains .","label":5,"label_text":"OWN"} +{"text":"Each rule name is turned into the name of a Prolog clause , and additional arguments are added to it .","label":5,"label_text":"OWN"} +{"text":"He then usesHobbsalgorithm to produce an ordering of these ISCs .","label":6,"label_text":"OTH"} +{"text":"We have realized a systematic and fine linguistic study on these verbs , looking carefully at each of them , one by one ( and we have 440 CoL verbs in French ) , in order to extract their intrinsic spatiotemporal properties .","label":5,"label_text":"OWN"} +{"text":"The goal is then to define a membership functionthat takes,, and W as its arguments and computes a value in [ 0,1 ] , representing the confidence with which one can state that sensebelongs in sense grouping W ' .","label":2,"label_text":"AIM"} +{"text":"There are hardly any distributional clues for distinguishing `` VBN '' and `` PRD 
'' since both are mainly used as complements of `` to be '' .","label":5,"label_text":"OWN"} +{"text":"The syntactic and semantic production rules for deriving the feminine singular of a French adjective by suffixation with `` e '' are given , with some details omitted , in Figure.","label":5,"label_text":"OWN"} +{"text":"a factor favouring left-branching which arises from the formal dependency construction ; and","label":6,"label_text":"OTH"} +{"text":"For example , in a variant of the English tagger which was not used in these experiments , a module which reduces the range of possible tags based on testing for only seven surface characteristics such as capitalisation and word endings improved the unknown word accuracy by 15 - 20 .","label":5,"label_text":"OWN"} +{"text":"For start states , the relation start\/1 should hold , and for final states the relation final\/1 should hold .","label":5,"label_text":"OWN"} +{"text":"Trying to use the event times would give the wrong analysis .","label":5,"label_text":"OWN"} +{"text":"Evaluating SPATTER against the Penn Treebank Wall Street Journal corpus using the PARSEVAL measures , SPATTER achieves 86 % precision , 86 % recall , and 1.3 crossing brackets per sentence for sentences of 40 words or less , and 91 % precision , 90 % recall , and 0.5 crossing brackets for sentences between 10 and 20 words in length .","label":5,"label_text":"OWN"} +{"text":"ifthen output.","label":5,"label_text":"OWN"} +{"text":"In this study , not only has the technique proved its worth by supporting generality , but through generalisation of training information it outperforms the equivalent lexical association approach given the same information .","label":1,"label_text":"CTR"} +{"text":"A semantic hierarchy ( WordNet ) where words are clustered in semantic classes , and semantic classes are organized hierarchically .","label":6,"label_text":"OTH"} +{"text":"What is the tag of the word two words back ?","label":6,"label_text":"OTH"} 
+{"text":"says that I didn't borrow any of your other possessions ,says that I didn't borrow anyone else 's car , andsays that I didn't do anything else to your car .","label":4,"label_text":"BKG"} +{"text":"However , if our goal is to specify large scale grammars in a clear , unambiguous manner , and to do so in such a way that our grammatical analyses are machine verifiable , then the use of formal specification languages has obvious advantages .","label":5,"label_text":"OWN"} +{"text":"how assumptions are cancelled when they turn out to be inconsistent with newly arriving input ( see section) .","label":0,"label_text":"TXT"} +{"text":"The most important ones are that the Brown Corpus provides a model of general multi-domain language use , so general language regularities can be induced from it , and second , many taggers come with data trained on the Brown Corpus which is useful for comparison and evaluation .","label":5,"label_text":"OWN"} +{"text":"For bigrams the resulting estimator has the general form","label":6,"label_text":"OTH"} +{"text":"And we have not yet found a case where the distance vectors give higher precision .","label":5,"label_text":"OWN"} +{"text":"We then view the problem of clustering words as that of estimating a probabilistic model ( representing probability distribution ) that generates such data .","label":5,"label_text":"OWN"} +{"text":"The agent architecture for deliberation and means-end reasoning is based on the IRMA architecture , also used in the TileWorld simulation environmentPollack and Ringuette 1990, with the addition of a model of limited Attention \/ Working memory , AWM .","label":3,"label_text":"BAS"} +{"text":"Paradigme provides the former dimension -- an associative system of words -- as a screen onto which the meaning of a word is projected like a still picture .","label":5,"label_text":"OWN"} +{"text":"The treatment is easily implementable , and forms the basis of the ellipsis resolution component currently used 
within the Core Language EngineAlshawi et al. 1992.","label":3,"label_text":"BAS"} +{"text":"if n > 0 , andare wffs , then so is.","label":5,"label_text":"OWN"} +{"text":"COMMANDS :","label":6,"label_text":"OTH"} +{"text":"as the optimal evaluation order .","label":5,"label_text":"OWN"} +{"text":"`` Implicit '' is a stylistic feature value , indicating that the splitting of the proof into the three subgoals is not made explicit .","label":5,"label_text":"OWN"} +{"text":"Only examplesare constituent coordinations .","label":4,"label_text":"BKG"} +{"text":"Propose any NP or S node encountered as the antecedent .","label":6,"label_text":"OTH"} +{"text":"An action that might occur or not-occur according to R is neither obligatory nor forbidden .","label":5,"label_text":"OWN"} +{"text":"Free combination :","label":5,"label_text":"OWN"} +{"text":"The above decomposition can be written in a more symmetric form as","label":5,"label_text":"OWN"} +{"text":"While these changes are motivated by the dependency model , I have also applied them to the adjacency model for comparison .","label":5,"label_text":"OWN"} +{"text":"Since a class-based model tends to have more than 100 parameters usually , the current data size available in the Penn Tree Bank ( See Table) is not enough for accurate estimation of the dependencies within case frames of most verbs .","label":5,"label_text":"OWN"} +{"text":"The merge in the second step of the algorithm is chosen to be the one minimizing the increase in entropy between the unmerged and the merged clusters .","label":5,"label_text":"OWN"} +{"text":"3 stages :","label":6,"label_text":"OTH"} +{"text":"From this scoring , two values were computed :","label":5,"label_text":"OWN"} +{"text":"If no suggestion was made , then she can expand the plan according to her own beliefs about the referent 's attributes and their salience .","label":5,"label_text":"OWN"} +{"text":"This algorithm gives the correct results in examples such as the following 
:","label":6,"label_text":"OTH"} +{"text":"Just like in case ofShieber's bottom-up generator , bottom-up evaluation of magic-compiled grammars produced with this Magic variant is only guaranteed to be complete in case the original grammar obeys the semantic monotonicity constraint .","label":1,"label_text":"CTR"} +{"text":"If we interpret the S , O and V as Subject , Object and Verb we can observe an equivalence between the structures with the bracketings :,,,.","label":5,"label_text":"OWN"} +{"text":"The mean number of turns in each phase was 8.03 .","label":5,"label_text":"OWN"} +{"text":"These propositions are known as the COMMON GROUNDLewis 1969,Grice 1967.","label":6,"label_text":"OTH"} +{"text":"The general processing model therefore consists of transitions of the form :","label":5,"label_text":"OWN"} +{"text":"The right-context neighbors all take `` to '' - infinitives as complements .","label":5,"label_text":"OWN"} +{"text":"In case of subjective adjective without garu , the constraint ` motivated = experiencer ' holds also for type 1 except for the case where directionally auxiliary verb `` yaru ( give ) '' , `` kureru ( be given ) '' are used .","label":5,"label_text":"OWN"} +{"text":"As for co-occurrence vectors , the precision levels off near a dimension of 100 .","label":5,"label_text":"OWN"} +{"text":"Output","label":5,"label_text":"OWN"} +{"text":"In the absence of any constraints , however , the number of parameters in each of the above three models is exponential ( even the slot-based model hasparameters ) , and thus it is infeasible to estimate them in practice .","label":5,"label_text":"OWN"} +{"text":"In the current implementation of LHIP , compiled rules are interpreted depth-first and left-to-right by the standard Prolog theorem-prover , giving an analyser that proceeds in a top-down , ` left-head-corner ' fashion .","label":5,"label_text":"OWN"} +{"text":"In the recent literatureGrishman and Sterling 1992,Resnik 1993several task oriented 
schemes to test Selectional Restrictions ( mainly on syntactic ambiguity resolution ) have been proposed .","label":6,"label_text":"OTH"} +{"text":"The present concern is not with whether there might be a grammar of discourse that determines this structure , or whether it is derived from the cues that cooperative speakers give hearers to aid in processing .","label":6,"label_text":"OTH"} +{"text":"However , for single layer nets we can choose to update weights directly : the error at an output node can trigger weight updates on the connections that feed it .","label":5,"label_text":"OWN"} +{"text":"a global threshold level may be set to determine the minimum fraction of spanned input that may be covered in a parse , and","label":5,"label_text":"OWN"} +{"text":"We compared the two vector representations by using them for the following two semantic tasks .","label":5,"label_text":"OWN"} +{"text":"But , no rule can ever exist in free style texts .","label":5,"label_text":"OWN"} +{"text":"This is the basis of our hypothesis that the distances in the reference network reflect the associative distances between wordsNitta 1993.","label":6,"label_text":"OTH"} +{"text":"But this view emerges naturally from our treatment of substitutions , and is arguably a more natural characterisation of the phenomena .","label":5,"label_text":"OWN"} +{"text":"The pattern type of the vocalism clashes with the broken plural pattern that the root expects .","label":5,"label_text":"OWN"} +{"text":"insertion-error hypothesis :","label":6,"label_text":"OTH"} +{"text":"It should always be positive , and asymptotic to maximum and minimum bounds .","label":5,"label_text":"OWN"} +{"text":"Our experimental results indicate that for certain classes of verbs , the accuracy achieved in a disambiguation experiment is improved by using the acquired knowledge of dependencies .","label":5,"label_text":"OWN"} +{"text":"I writefor.","label":5,"label_text":"OWN"} +{"text":"If an utterance has not been 
understood , or is believed to be deficient in some way , this brings about an obligation to repair the utterance .","label":5,"label_text":"OWN"} +{"text":"In short a subjective predicate describes the experiencer 's inner state which can exclusively be known by the experiencer him \/ herself .","label":5,"label_text":"OWN"} +{"text":"The score of the resulting rule will be higher than the scores of the merged rules since the number of positive observations increases and the number of the trials remains the same .","label":5,"label_text":"OWN"} +{"text":"The resulting classification was applied to all tokens in the Brown corpus .","label":5,"label_text":"OWN"} +{"text":"In this analysis , sites that used some form of the word bulln , with the suffix - n , were distinguished from those using the suffix - g .","label":6,"label_text":"OTH"} +{"text":"This paper proposes a method for measuring semantic similarity between words as a new tool for text analysis .","label":2,"label_text":"AIM"} +{"text":"Such structures can straightforwardly be thought of as models , in the usual sense of first order model theoryHodges 1993.","label":5,"label_text":"OWN"} +{"text":"A system under construction is outlined which incorporates morphological checks ( using new two-level error rules ) over a directed letter graph , tag positional trigrams and partial parsing .","label":2,"label_text":"AIM"} +{"text":"As evidence for these claims , I present experimental results showing how , for a particular task and training corpus , clustering produces a sizeable improvement in unigram - and bigram-based models , but not in trigram-based ones ; this is consistent with experience in the speech understanding community that while moving from bigrams to trigrams usually produces a definite payoff , a move from trigrams to 4-grams yields less clear benefits for the domain in question .","label":5,"label_text":"OWN"} +{"text":"Do bottom-up or top-down techniques work best 
?","label":2,"label_text":"AIM"} +{"text":"The figure shows that sentence four and five have penalty score three , that sentence three has two , that sentence one and two have one , and that sentence six has no penalty score .","label":5,"label_text":"OWN"} +{"text":"Thus , the coarseness or fineness of clustering also determines the degree of smoothing .","label":5,"label_text":"OWN"} +{"text":"Our system improves upon this simple size metric by computing sizes based on a compact representation motivated by information theory .","label":5,"label_text":"OWN"} +{"text":"For the slot-based model , sometimes case slots are found to be dependent .","label":5,"label_text":"OWN"} +{"text":"Cut up the training examples by matching them against the and-or tree and cutting at the determined cutnodes .","label":5,"label_text":"OWN"} +{"text":"When-clauses , for example , introduce a new reference time , which is ordered after the events described in the preceding discourse .","label":6,"label_text":"OTH"} +{"text":"u is an object in I , and","label":5,"label_text":"OWN"} +{"text":"In this example , one of the possible accounts for this interpretation is the following .","label":4,"label_text":"BKG"} +{"text":"The PCA","label":5,"label_text":"OWN"} +{"text":"And looking beyond Ireland , many have commented that the language of Ulster in general is similar to that of Scotland .","label":4,"label_text":"BKG"} +{"text":"Empty constituents in the syntax are not in themselves referential , but are recovered during Common Topic inference .","label":5,"label_text":"OWN"} +{"text":"In the experiments below the accuracy of such a system is measured .","label":5,"label_text":"OWN"} +{"text":"The head-corner generator invan Noord 1993is an illustrative instance of a sophisticated combination of top-down prediction and bottom-up structure building , see Fig..","label":6,"label_text":"OTH"} +{"text":"There does not appear to be an English source of this kind , so it is planned to compile 
one .","label":5,"label_text":"OWN"} +{"text":"The first problem , asGaston ParisnotedDurand 1889, is that isoglosses rarely coincide .","label":1,"label_text":"CTR"} +{"text":"Support for bidirectional parsingSatta and Stock to appearis another candidate for inclusion in a later version .","label":5,"label_text":"OWN"} +{"text":"The observed perplexityof a language model with respect to an ( imaginary ) infinite test sequenceis defined through the formulaJelinek 1990.","label":4,"label_text":"BKG"} +{"text":"The use of this combinator assimilates cases of noncoordinate RNR to cases involving parasitic gaps .","label":6,"label_text":"OTH"} +{"text":"The first role can be captured using syntactic types , where each type corresponds to a potentially infinite number of partial syntax trees .","label":5,"label_text":"OWN"} +{"text":"Descriptions of French , Polish and English inflectional morphology have been developed for it , and I show how various aspects of the mechanism allow phenomena in these languages to be handled .","label":5,"label_text":"OWN"} +{"text":"control phases ;","label":5,"label_text":"OWN"} +{"text":"A stateset, where i is the position of the input , is an ordered set of states .","label":6,"label_text":"OTH"} +{"text":"Conceptual association outperforms lexical association , presumably because of its ability to generalise .","label":5,"label_text":"OWN"} +{"text":"For example in","label":5,"label_text":"OWN"} +{"text":"Hobbsordering of entities from a previous utterance varies fromBrennan et al.in that possessors come before case-marked objects and indirect objects , and there may be some other differences as well but none of them were relevant to the analysis that follows .","label":6,"label_text":"OTH"} +{"text":"As mentioned in section, there are cases where this process stops ( caused by underspecification of the input ) before finishing its output .","label":5,"label_text":"OWN"} +{"text":"Its inclusion violates the rule of Don't tell people 
facts that they already know .","label":6,"label_text":"OTH"} +{"text":"A general algorithm for least-errors recognition , which is based only on syntactic information , was proposed byG. Lyonto deal with the extragrammaticality .","label":6,"label_text":"OTH"} +{"text":"The most common types of meaning postulates in R are those for restriction , hyponymy , and disjointness , expressed as follows :","label":5,"label_text":"OWN"} +{"text":"The model is trained using a set of initial transition biases , including both positive and negative constraints on tag sequences .","label":5,"label_text":"OWN"} +{"text":"If there is such a discrepancy , the interruption is a necessary contribution to a collaborative plan , not a distraction from the joint activity .","label":5,"label_text":"OWN"} +{"text":"Doctors are minimally similar to medicine and hospitals , since these things are all instances of `` something having concrete existence , living or nonliving '' ( WordNet class entity ) , but they are much more similar to lawyers , since both are kinds of professional people , and even more similar to nurses , since both are professional people working specifically within the health professions .","label":5,"label_text":"OWN"} +{"text":"A default classifier is stored in the dictionary for uncountable nouns and pluralia tanta .","label":5,"label_text":"OWN"} +{"text":"Similarity is computed by spreading activation ( or association )Waltz and Pollack 1985on a semantic network constructed systematically from an English dictionary .","label":5,"label_text":"OWN"} +{"text":"For example , if we assume that each edge in the FSA is associated with a probability it is possible to define a threshold such that each partial result that is derived has a probability higher than the threshold .","label":5,"label_text":"OWN"} +{"text":"I will attempt to argue that the important distinction is not so much a rational-empirical or symbolic-statistical distinction but rather a 
qualitative-quantitative one .","label":0,"label_text":"TXT"} +{"text":"When planning , an agent considers both its goals and obligations in order to determine an action that addresses both to the extent possible .","label":5,"label_text":"OWN"} +{"text":"However it is difficult to exclude these using syntactic constraints , without also excluding the more acceptable :","label":5,"label_text":"OWN"} +{"text":"Notice that this grammar models a sentence as a sequence of independently generated nonterminal symbols .","label":5,"label_text":"OWN"} +{"text":"Thus achieving mutual belief of understanding is an instance of the type of activity that agents must perform as they collaborate to achieve the purposes of the dialogue .","label":5,"label_text":"OWN"} +{"text":"app encodes application , and so in the derivation of harry found , the type-raised harry has the-Prolog value.","label":5,"label_text":"OWN"} +{"text":"Figureshows the resulting accuracy , with accuracy values from figuredisplayed with dotted lines .","label":5,"label_text":"OWN"} +{"text":"A `` semantic distance '' rating between the new DCU and each previous thread is determined .","label":5,"label_text":"OWN"} +{"text":"The notion of obligation has been studied for many centuries , and its formal aspects are examined using Deontic Logic .","label":5,"label_text":"OWN"} +{"text":"We expect that this would help the system find word boundaries for reasons detailed inChurch 1987-- in brief , that allophonic variation may be quite useful in predicting word boundaries .","label":5,"label_text":"OWN"} +{"text":"Note that since thesubset of L already accounts for the cfr. 
languages , this observation extends to SDL- .","label":5,"label_text":"OWN"} +{"text":"Two algorithms are commonly used , known as the Forward-Backward ( FB ) and Viterbi algorithms .","label":4,"label_text":"BKG"} +{"text":"and where we denote the training data as O , for observations .","label":4,"label_text":"BKG"} +{"text":"If we now remove the assumption made earlier that there is exactly one r-dependent of a head , we need to elaborate the derivation model to include choosing the number of such dependents .","label":5,"label_text":"OWN"} +{"text":"An extension to crossover allows more than one crossing point .","label":6,"label_text":"OTH"} +{"text":"The analysis accounts for the range of data given inKehler 1993b, although one point of departure exists between that account and the current one with respect to clauses conjoined with but .","label":1,"label_text":"CTR"} +{"text":"In this paper , we describe an experiment on fully automatic derivation of the knowledge necessary for part-of-speech tagging .","label":2,"label_text":"AIM"} +{"text":"Of theseCarter's rule clearly gets 5 , and another 3 seem to rest on whether one might want to establish a discourse entity from a previous utterance .","label":5,"label_text":"OWN"} +{"text":"The likelihood for this to happen by chance decreases drastically with increased rule length .","label":5,"label_text":"OWN"} +{"text":"Now , in natural language negative correlations are an important source of information : the occurrence of some words or groups of words inhibit others from following .","label":5,"label_text":"OWN"} +{"text":"Regardless , more than 90 % of the correct answers are within the second rank .","label":5,"label_text":"OWN"} +{"text":"Using this definition , an n-gram model can be represented by a decision-tree model with n - 1 questions .","label":6,"label_text":"OTH"} +{"text":"English","label":5,"label_text":"OWN"} +{"text":"The algorithm builds the set of all possible interpretations for a given 
utterance , using a generalization of the semantic tableau technique .","label":5,"label_text":"OWN"} +{"text":"`` Since a is an element of, andis a subset of, a is an element ofaccording to the definition of subset . ''","label":5,"label_text":"OWN"} +{"text":"e.g. `` So my question is ... ''","label":5,"label_text":"OWN"} +{"text":"However , here let 's define an n-gram model more loosely as a model which defines a probability distribution on a random variable given the values of n - 1 random variables ,There is no assumption in the definition that any of the random variables F orrange over the same vocabulary .","label":6,"label_text":"OTH"} +{"text":"This entry is where the variable $ strong points to .","label":5,"label_text":"OWN"} +{"text":"The following is based on what we noticed to be useful during the development of the taggers .","label":5,"label_text":"OWN"} +{"text":"Instead , each word is modeled by its own specific class , a set of words which are most similar to it ( as in k-nearest neighbor approaches in pattern recognition ) .","label":6,"label_text":"OTH"} +{"text":"Sequences of vowels or consonants :","label":5,"label_text":"OWN"} +{"text":"Cohenproposed a framework for analyzing the structure of argumentative discourseCohen 1987, yet did not provide a concrete identification procedure for ` evidence ' relationships between sentences , where no linguistic clues indicate the relationships .","label":1,"label_text":"CTR"} +{"text":"First , unlikeSussna's proposal , this algorithm aims to disambiguate groupings of nouns already established ( e.g. 
by clustering , or by manual effort ) to be related , as opposed to groupings of nouns that happen to appear near each other in running text ( which may or may not reflect relatedness based on meaning ) .","label":1,"label_text":"CTR"} +{"text":"Thirdly , the distribution of interruptions and summaries differs across dialogue types .","label":5,"label_text":"OWN"} +{"text":"In this case we obtain :","label":6,"label_text":"OTH"} +{"text":"A recent Microsoft product keeps a record of personal habitual mistakes .","label":5,"label_text":"OWN"} +{"text":"This local threshold value overrules the global threshold .","label":5,"label_text":"OWN"} +{"text":"We extract kanzi character sequences from newspaper editorials and columns and encyclopedia text , which has no overlap with the training corpus : 954 compound nouns consisting of four kanzi characters , 710 compound nouns consisting of five kanzi characters , and 786 compound nouns consisting of six kanzi characters are manually extracted from the set of the above kanzi character sequences .","label":5,"label_text":"OWN"} +{"text":"It is interpreted ( literally ) as the initiation of an inform about an obligation to perform a domain action ( shipping the oranges ) .","label":5,"label_text":"OWN"} +{"text":"The question of appropriate network architecture is examined inPao 1989,Widrow and Lehr 1992andLyon 1994.","label":5,"label_text":"OWN"} +{"text":"shows the percentage of correct answers ranked lower or equal to 4th place .","label":5,"label_text":"OWN"} +{"text":"We reproduce this algorithm in full in the appendix along with an example .","label":0,"label_text":"TXT"} +{"text":"This splitting is performed at an early stage by the tokeniser , before dictionary lookup .","label":5,"label_text":"OWN"} +{"text":"use another semantic information as well as thesauruses , such as selectional restriction","label":5,"label_text":"OWN"} +{"text":"First of all , since we don't need products to obtain our results and since 
they only complicate matters , we eliminate products from consideration in the sequel .","label":5,"label_text":"OWN"} +{"text":"Several follow-up experiments were used to confirm the results : using corpora from the Penn treebank , using equivalence classes to ensure that all lexical entries have a total relative frequency of at least 0.01 , and using larger corpora .","label":5,"label_text":"OWN"} +{"text":"Collaborative negotiation differs from non-collaborative negotiation and argumentation mainly in the attitude of the participants , since collaborative agents are not self-centered , but act in a way as to benefit the agents as a group .","label":4,"label_text":"BKG"} +{"text":"Since items of type phrase are never introduced at that type , but only in the form of sub-types , there are no transitions from phrase in the corpus .","label":5,"label_text":"OWN"} +{"text":"This would result in misleading probabilities for the X-bar schemata since the use of schemata,, andwould immediately bring down the probability of a parse compared to a parse of the same string which happened to use onlyand.","label":5,"label_text":"OWN"} +{"text":"Efficient Compilation of HPSG Grammars '' of the Sonderforschungsbereich 340 of the Deutsche Forschungsgemeinschaft .","label":5,"label_text":"OWN"} +{"text":"applies to stem morphemes reading three boundary symbols simultaneously ; this marks the end of a stem .","label":5,"label_text":"OWN"} +{"text":"Since we can use existing methods to perform disambiguation for the rest of the data , we can improve the disambiguation accuracy for the entire test data using this knowledge .","label":5,"label_text":"OWN"} +{"text":"FollowingWalker 1992's weakest link assumption the strength of the evidence is the weaker of the strength of the belief and the strength of the evidential relationship .","label":3,"label_text":"BAS"} +{"text":"Questions can mention concepts such as places , times , dates , fares , meals , airlines , plane types and 
ground transportation , but most utterances mention several of these , and there are few obvious restrictions on which of them can occur in the same utterance .","label":6,"label_text":"OTH"} +{"text":"Correctly determining number is a difficult problem when translating from Japanese to English .","label":4,"label_text":"BKG"} +{"text":"We denote this language by L(G) .","label":5,"label_text":"OWN"} +{"text":"Given that the transition is a shift , there seem to be more and less coherent ways to shift .","label":5,"label_text":"OWN"} +{"text":"State-Application and State-Prediction together provide the basis of a sound and complete parser .","label":5,"label_text":"OWN"} +{"text":"The \/\/ - values of the new subgoals, ... ,are the disjoint set unionswhereis the \/ - value ofin the local tree given in the grammar .","label":5,"label_text":"OWN"} +{"text":"In non-incremental generation , this corresponds to the fact that the input lacks necessary information , because the entire input is assumed to be given at one time ( e.g. 
, the undecidable number value of the example described in section) .","label":5,"label_text":"OWN"} +{"text":"This last step is made because the definitive classes must be mutually disjoint .","label":5,"label_text":"OWN"} +{"text":"The model-ordering relationestablishesas the optimistic model for the theory because it contains as much information asand is easier to defeat .","label":5,"label_text":"OWN"} +{"text":"We operate with a range where the lower bound gives at least the desired coverage , but where the higher bound doesn't .","label":5,"label_text":"OWN"} +{"text":"Moreover , they can utilise more of the implicit information in the training data by modelling negative relationships .","label":5,"label_text":"OWN"} +{"text":"The typing of a collocation with such a function opens up the way to a treatment of collocations inside a given language module and hence to a substantial reduction in the number of collocations explicitly handled in the multilingual transfer dictionary .","label":5,"label_text":"OWN"} +{"text":"Consider , however , that instead of generating all the possible temporal \/ rhetorical structures , we could use the information available to fill in the most restrictive type possible in the type hierarchy of temporal \/ rhetorical relations shown in Figure.","label":5,"label_text":"OWN"} +{"text":"The problem with using results obtained from the implementation given here is that the grammar is sufficiently underspecified and so leaves too great a task for the probabilistic information .","label":5,"label_text":"OWN"} +{"text":"forall and exists are encoded similarly to abstraction , in that they take a functional argument and so object-level binding of variables by quantifiers is handled by meta-level- abstraction .","label":5,"label_text":"OWN"} +{"text":"We study the computational complexity of the parsing problem of a variant of Lambek Categorial Grammar that we call semidirectional .","label":2,"label_text":"AIM"} +{"text":"The process 
continues until the top-level proposed beliefs are evaluated .","label":5,"label_text":"OWN"} +{"text":"This can be seen in the following portion of a telephone conversation recorded byPsathas 1991, p. 196 .","label":4,"label_text":"BKG"} +{"text":"Index 1 ( I1 ) is the index associated with the non-unit clause , Index 2 ( I2 ) is associated with the unit clause , andis the result of combining the indices .","label":5,"label_text":"OWN"} +{"text":"The right tree defines the relation among the semantic representation of the root and the semantic representations of the leaves .","label":4,"label_text":"BKG"} +{"text":"An utterance hypothesis encountered at run time is then treated as if it had been selected from the subpopulation of sentences represented by one of these subcorpora .","label":5,"label_text":"OWN"} +{"text":"Words not found in the lexicon are analysed by a separate finite-state transducer , the guesser .","label":5,"label_text":"OWN"} +{"text":"The heuristic is successful in labeling 21 of the 25 bad parses as `` bad '' .","label":5,"label_text":"OWN"} +{"text":"Our solution for the projection problem does not differ from a solution for individual utterances .","label":5,"label_text":"OWN"} +{"text":"Let 's assume that the latter relation is stated by pairs of trees .","label":4,"label_text":"BKG"} +{"text":"The ` n ' in the top DRS is a mnemonic for ` now ' - the utterance time .","label":6,"label_text":"OTH"} +{"text":"Notice that similarity is a more specialized notion than association or relatedness : doctors and sickness may be highly associated , but one would not judge them to be particularly similar .","label":5,"label_text":"OWN"} +{"text":"For example , a question about a word is represented as 30 binary questions .","label":5,"label_text":"OWN"} +{"text":"As an example , consider the following discoursePartee 1984.","label":6,"label_text":"OTH"} +{"text":"The parser then selects the parse ranked best ( i.e. 
the parse of lowest overall score ) .","label":5,"label_text":"OWN"} +{"text":"Next we talk about the semantics of garu .","label":5,"label_text":"OWN"} +{"text":"This can be verified on the basis of a sample lexical entry for a main verb .","label":5,"label_text":"OWN"} +{"text":"For example , if a rule was detected to work just twice and the total number of observations was also two , its estimateis very high ( 1 , or 0.83 for the smoothed version ) but clearly this is not a very reliable estimate because of the tiny size of the sample .","label":5,"label_text":"OWN"} +{"text":"However , there are many pairs or triples of clusters that should be collapsed into one on linguistic grounds .","label":5,"label_text":"OWN"} +{"text":"Moreover , the essentially model theoretic slant on specification we propose here seems particularly well suited to this aim .","label":5,"label_text":"OWN"} +{"text":"The normaliser ensures that all parameters for a head noun sum to unity .","label":5,"label_text":"OWN"} +{"text":"In order to prove this it must be possible to test the hypothesis that it is only important propositions that get repeated , paraphrased or made explicit .","label":5,"label_text":"OWN"} +{"text":"Thus , a collaborative dialogue is modeled in terms of the evolution of the referring plan .","label":6,"label_text":"OTH"} +{"text":"On the other hand , if the evaluation indicates that the agent should maintain her original belief , she should attempt to provide sufficient justification to convince the other agent to adopt this belief if the belief is relevant to the task at hand .","label":4,"label_text":"BKG"} +{"text":"as well as in sentential complements , such as","label":5,"label_text":"OWN"} +{"text":"A partition breaks an obligatory rule if the surface target does not match but everything else , including the feature specification , does .","label":5,"label_text":"OWN"} +{"text":"The parameter which distinguishes the two approaches is partial vs. 
total downstep .","label":5,"label_text":"OWN"} +{"text":"Activation of connections between referentially related imagens and logogens is called referential processing .","label":6,"label_text":"OTH"} +{"text":"The formalism , which we refer to as Partially Linear PATR ( PLPATR ) manipulates feature structures rather than stacks .","label":5,"label_text":"OWN"} +{"text":"One discourse participant uses an anaphor to summarize a plan , but when the other participant evaluates this plan there may be a control shift and any reference to the plan will necessarily cross a control boundary .","label":5,"label_text":"OWN"} +{"text":"The left context vector of the following word .","label":5,"label_text":"OWN"} +{"text":"RAND-PHONO inserted random segmentation points where permitted by the phonotactic constraints .","label":5,"label_text":"OWN"} +{"text":"We consider one case briefly .","label":5,"label_text":"OWN"} +{"text":"As a result , zero anaphora resolution of complex sentence is not only to be done syntactically , but also to be done pragmatically and \/ or semantically .","label":5,"label_text":"OWN"} +{"text":"But can a decision-tree model be represented by an n-gram model ?","label":6,"label_text":"OTH"} +{"text":"This distribution indicates that some control segments are hierarchically related to others .","label":5,"label_text":"OWN"} +{"text":"We generalize the indexing scheme from chart parsing in order to allow different operations for the combination of strings .","label":5,"label_text":"OWN"} +{"text":"A colon `` : '' separates multiple categories assigned to a word .","label":5,"label_text":"OWN"} +{"text":"We use features of both the candidate parse and the ignored parts of the original input sentence .","label":5,"label_text":"OWN"} +{"text":"For `` seemed '' , left-context neighbors are words that have similar types of noun phrases in subject position ( mainly auxiliaries ) .","label":5,"label_text":"OWN"} +{"text":"To perform this experiment we take 
one-by-one each rule from the rule-sets produced at the rule extraction phase , take each word token from the corpus and guess its POS - set using the rule if the rule is applicable to the word .","label":5,"label_text":"OWN"} +{"text":"Our language is called and its primitive symbols ( with respect to a given signature) consists of","label":5,"label_text":"OWN"} +{"text":"Yet a computational system has no choice but to consider other , more awkward possibilities -- for example , this cluster might be capturing a distributional relationship between advice ( as one sense of counsel ) and royalty ( as one sense of court ) .","label":1,"label_text":"CTR"} +{"text":"Under different circumstances the derivation of the next-node is also presented in different ways .","label":5,"label_text":"OWN"} +{"text":"For perplexity evaluation , we tuned the similarity model parameters by minimizing perplexity on an additional sample of 57.5 thousand words of WSJ text , drawn from the ARPA HLT development test set .","label":5,"label_text":"OWN"} +{"text":"Any word or phrase in that group that appears in the noun taxonomy for WordNet would be a candidate as a test instance -- for example , line , or secret writing .","label":5,"label_text":"OWN"} +{"text":"The temperature determines the fraction of the search space that is covered by a single perturbation step .","label":6,"label_text":"OTH"} +{"text":"P is a total function fromto the set of partial functions from U to U , and","label":5,"label_text":"OWN"} +{"text":"And , we are now applying it to text segmentationGrosz and Sidner 1986,Youmans 1991, i.e. 
to capture the shifts of coherent scenes in a story .","label":5,"label_text":"OWN"} +{"text":"Generally , obligation is defined in terms of a modal operator often called permissible .","label":5,"label_text":"OWN"} +{"text":"Quantification of generalization level appropriateness","label":5,"label_text":"OWN"} +{"text":"We give an informal argument for this .","label":5,"label_text":"OWN"} +{"text":"Hyphens used to emphasise a word , e.g. har-mo-ni-ser , also leave endings unaltered .","label":5,"label_text":"OWN"} +{"text":"Indeed , if the non-terminals are viewed as atomic categories then there is no way this can be done .","label":5,"label_text":"OWN"} +{"text":"Instead of feature based syntax trees and first-order logical forms we will adopt a simpler , monostratal representation that is more closely related to those found in dependency grammarsHudson 1984.","label":5,"label_text":"OWN"} +{"text":"They cannot take GEN ` a ' because they cannot be modified by a .","label":5,"label_text":"OWN"} +{"text":"In the general case , the off-line variant ( in which all unifications are deferred until the complete CF parse forest has been constructed ) is not guaranteed to terminate ; indeed , it usually does not do so with the ANLT grammar .","label":6,"label_text":"OTH"} +{"text":"Continuing is preferred over retaining which is preferred over shifting .","label":6,"label_text":"OTH"} +{"text":"In a recent field trip to Western Cameroon to study the Bamileke Dschang noun associative construction , I was able to collect a small amount of data relating to Fscaling throughout a particular informant 's pitch range .","label":5,"label_text":"OWN"} +{"text":"Whereas the segmentation point data are inconclusive , word type data demonstrate that combining information sources is more useful than using distributional information alone .","label":5,"label_text":"OWN"} +{"text":"For partial downstep , we havewhile for total downstep we have.","label":5,"label_text":"OWN"} 
+{"text":"One obvious solution to this problem would be to extend distributional grouping methods to word senses .","label":4,"label_text":"BKG"} +{"text":"These are cases of ` zeugma ' and are unacceptable except as jokes .","label":4,"label_text":"BKG"} +{"text":"We also have not needed to make a decision on how to score an algorithm that only finds one interpretation for an utterance that humans find ambiguous .","label":5,"label_text":"OWN"} +{"text":"This paper is organized as follows : We first review a general algorithm for least-errors recognition .","label":0,"label_text":"TXT"} +{"text":"Of course , I do not expect that this technique using flat 1-unit costs will prove superior to all methods that are more sensitive to phonetic details .","label":5,"label_text":"OWN"} +{"text":"We conducted an empirical analysis into the relation between control and discourse structure .","label":2,"label_text":"AIM"} +{"text":"However , this would lead to many times more patterns being produced than are really necessary .","label":5,"label_text":"OWN"} +{"text":"gives two boundary rules :is used for non-stem morphemes , e.g. prefixes and suffixes .","label":5,"label_text":"OWN"} +{"text":"Categories are sets of feature value equations containing syntactic information relevant to determining how uninstantiated meta-variables can be resolved .","label":5,"label_text":"OWN"} +{"text":"Notice thatdoes not need to be symmetric for this derivation , as the two distributions are simply related by Bayes 's rule .","label":5,"label_text":"OWN"} +{"text":"The question addressed in this paper concerns how we should relate pitch contours to tone sequences .","label":2,"label_text":"AIM"} +{"text":"Tableshows the number of categories at all levels .","label":5,"label_text":"OWN"} +{"text":"In case of generation , this means that the user annotates the path specifying the logical form , i.e. 
, the path ( or some of its subpaths ) , as bound .","label":5,"label_text":"OWN"} +{"text":"In Shieber et al. 1990 an ad-hoc solution was proposed to enforce termination when the semantic head has been moved .","label":6,"label_text":"OTH"} +{"text":"Given a model M and data S , its total description length L ( M ) is computed as the sum of the model description length, the description length of its parameters, and the data description length.","label":5,"label_text":"OWN"} +{"text":"Analysing each type of control shift , it is clear that there are differences between the cues used for the topic shift and the no shift cases .","label":5,"label_text":"OWN"} +{"text":"In Kehler 1994c, we showed how this architecture also accounted for the facts that Levin and Prince noted about gapping .","label":6,"label_text":"OTH"} +{"text":"A word vector is defined as the list of distances from a word to a certain set of selected words , which we call origins .","label":6,"label_text":"OTH"} +{"text":"We could argue that in cases of compound formation , exactly the same process is to be accounted for , since the compound embodies both the concept mediated by the LF and its argument lexeme .","label":5,"label_text":"OWN"} +{"text":"Finally note that by another application of Bayes rule we can replace the two factorsbywithout changing other parts of the model .","label":5,"label_text":"OWN"} +{"text":"The key design decision is to compose morphophonological and morphosyntactic information , but not the lexicon , when compiling the description .","label":5,"label_text":"OWN"} +{"text":"In what follows , I explain some of the parameters of annealing search as used in the current implementation .","label":0,"label_text":"TXT"} +{"text":"Following Scha and Polanyi 1988 and Pruest et al. 
1994, our model of discourse consists of units called Discourse Constituent Units ( DCUs ) which are related by various temporal and rhetorical relations .","label":3,"label_text":"BAS"} +{"text":"We define the transition relation using the relation trans\/3 .","label":5,"label_text":"OWN"} +{"text":"The embedding conditions determine , that this reference time be universally quantified over , causing an erroneous reading in which for each event ,, of John 's calling , for each earlier time, he lights up a cigarette .","label":6,"label_text":"OTH"} +{"text":"The approach described here is quite different .","label":1,"label_text":"CTR"} +{"text":"Brown et al. 1992 suggest a class-based n-gram model in which words with similar cooccurrence distributions are clustered in word classes .","label":6,"label_text":"OTH"} +{"text":"It makes the association score prefer incorrect classes and jump on over-generalizations .","label":1,"label_text":"CTR"} +{"text":"This preference was designed to handle the phenomena of false starts , which is common in spontaneous speech .","label":5,"label_text":"OWN"} +{"text":"Example .","label":4,"label_text":"BKG"} +{"text":"The other difference is the use of left and right as models of the dominance relationships between nodes .","label":5,"label_text":"OWN"} +{"text":"The quantitative model is in a much better position to cope with these problems .","label":5,"label_text":"OWN"} +{"text":"A better measure is the proportion of ambiguous words which are given the correct tag , where by ambiguous we mean that more than one tag was hypothesised .","label":1,"label_text":"CTR"} +{"text":"In Tables-, we summarize our results .","label":5,"label_text":"OWN"} +{"text":"It would also permit measuring the utility of the SRs obtained using WordNet in comparison with other frameworks using other kinds of knowledge .","label":5,"label_text":"OWN"} +{"text":"Dotted lines are used to represent when 'normal ' facts are combined with magic facts to 
derive new magic facts .","label":6,"label_text":"OTH"} +{"text":"There is however no reason to keep this rule in the magic-compiled grammar .","label":5,"label_text":"OWN"} +{"text":"Sectionraises some open questions concerning the determination of parallelism between ellipsis and antecedent , and other issues .","label":0,"label_text":"TXT"} +{"text":"Disambiguation is performed with respect to WordNet senses , which are fairly fine-grained ; however , the method also permits the assignment of higher-level WordNet categories rather than sense labels .","label":5,"label_text":"OWN"} +{"text":"Although both algorithms are part of theories of discourse that posit the interaction of the algorithm with an inference or intentional component , we will not use reasoning in tandem with the algorithm 's operation .","label":5,"label_text":"OWN"} +{"text":"In the case where lexical entries have been associated with preference information , this information can be exploited to guide the heuristic search .","label":5,"label_text":"OWN"} +{"text":"The magnitude of the scores is ignored by our qualitative language processor ; it simply processes the hypotheses one at a time until it finds one for which it can produce a complete logical form interpretation that passes grammatical and interpretation constraints , at which point it discards the remaining hypotheses .","label":5,"label_text":"OWN"} +{"text":"This distinction can be justified monolingually for the other languages that we treat ( English , French , and Japanese ) .","label":5,"label_text":"OWN"} +{"text":"Fromthe world knowledge of the system would be reinforced by the two stereotypical transitions :","label":5,"label_text":"OWN"} +{"text":"We do not have a fully satisfactory solution to this problem .","label":5,"label_text":"OWN"} +{"text":"The need to figure out such systems as the comparative phonology of various linguistic sites can be very time-consuming and fraught with arbitrary choices 
.","label":1,"label_text":"CTR"} +{"text":"Clearly , this construct is closely related to the KasperRounds path equality Kasper and Rounds 1990; the principle difference is that whereas the KasperRounds enforces path equalities within the domain of feature structures , the LFG path equality enforces equalities between the tree domain and the feature structure domain .","label":5,"label_text":"OWN"} +{"text":"The formalism as currently described can be used to simulate arbitrary Turing Machine computations .","label":5,"label_text":"OWN"} +{"text":"for any c , which we substitute intoto obtain","label":5,"label_text":"OWN"} +{"text":"A common mistake is to place the cursor one extra position to the left when entering diacritics .","label":4,"label_text":"BKG"} +{"text":"If T is a partial phrase marker , and T ' is a partial phrase marker which differs from it only in that a single non-terminal nodein T has been expanded toin T ' , then.","label":6,"label_text":"OTH"} +{"text":"This is basically the entropy used in Quinlan 1986.","label":5,"label_text":"OWN"} +{"text":"Propose as the antecedent any NP node encountered that has an NP or S node on the path from it to X .","label":6,"label_text":"OTH"} +{"text":"When the system does not have the turn , the conversational state will still be updated , but the actor will not try to deliberate or act .","label":5,"label_text":"OWN"} +{"text":"In Wood's SYSCONJ system , the parser can back up to various points in the history of the parse , and parse the second conjunct according to the configuration found .","label":6,"label_text":"OTH"} +{"text":"For each rule r in P with head , say , p () , and for each literalin its body , add a magic rule to.","label":6,"label_text":"OTH"} +{"text":"However , for many tasks , one is interested in relationships among word senses , not words .","label":1,"label_text":"CTR"} +{"text":"Finally a partial implementation of these ideas is presented , along with some preliminary results from testing 
on a small set of sentences .","label":5,"label_text":"OWN"} +{"text":"Each proper branch is a binary branching structure , and so all grammatical constraints will need to be encoded locally .","label":6,"label_text":"OTH"} +{"text":"whereis the frequency ofinand=, and find the value of this quantity for all possible merged clusters .","label":5,"label_text":"OWN"} +{"text":"Because a path through low-frequency words ( rare words ) implies a strong relation , it should be measured as a shorter path .","label":5,"label_text":"OWN"} +{"text":"They also included paraphrases , in which the speaker reformulated or repeated part or all of what had just been said .","label":5,"label_text":"OWN"} +{"text":"Sectionprovides a general description for defaults in generation emphasizing the specific requirements in an incremental system .","label":0,"label_text":"TXT"} +{"text":"First , the use of conceptual association not only enables a broad coverage , but also improves the accuracy .","label":5,"label_text":"OWN"} +{"text":"It does not care about natural language properties : it only considers texts as streams of characters .","label":1,"label_text":"CTR"} +{"text":"Each new gene is created by mating the best of three randomly chosen genes with the best of three other randomly chosen genes .","label":6,"label_text":"OTH"} +{"text":"We tested a TNCB-based generator in the SLEMaT MT system with the pathological cases described in Brew 1992 against Whitelock's original generation algorithm , and have obtained speed improvements of several orders of magnitude .","label":1,"label_text":"CTR"} +{"text":"It remains to be seen how bottom-up Earley deduction compares with ( and can be combined with ) the improved top-down Earley deduction of Doerre 1993, Johnson 1993 and Neumann forthcoming, and to head-driven methods with well-formed substring tables Bouma and van Noord 1993, and which methods are best suited for which kinds of problems ( e.g. 
parsing , generation , noisy input , incremental processing etc . )","label":5,"label_text":"OWN"} +{"text":"Her example which we reproduce in figurecan also be accounted for using the continuing \/ retaining distinction .","label":5,"label_text":"OWN"} +{"text":"Generally , grounding is considered less urgent than acting based on communicative intentions , although some grounding acts will be performed on the basis of obligations which arise while interpreting prior utterances .","label":5,"label_text":"OWN"} +{"text":"Sectionbelow shows the result of making these two additions to the method .","label":0,"label_text":"TXT"} +{"text":"It is well known that a simple tabulation of frequencies of certain words participating in certain configurations , for example of frequencies of pairs of a transitive main verb and the head noun of its direct object , cannot be reliably used for comparing the likelihoods of different alternative configurations .","label":4,"label_text":"BKG"} +{"text":"In the presence of models with varying complexity , MLE tends to overfit the data , and output a model that is too complex and tailored to fit the specifics of the input data .","label":1,"label_text":"CTR"} +{"text":"In making a reference , an agent uses the most salient attributes of the referent .","label":5,"label_text":"OWN"} +{"text":"Also a sentence doesn't always start with a capitalized letter and finish with a full stop ( especially in emails ) .","label":5,"label_text":"OWN"} +{"text":"It goes on to investigate ways in which a corpus pre-parsed with this formalism may be processed to provide a probabilistic language model for use in the parsing of fresh texts .","label":5,"label_text":"OWN"} +{"text":"We have a lexical transducer for French Karttunen et al. 
1992 which was built using Xerox Lexical Tools Karttunen and Beesley 1992, Karttunen 1993.","label":3,"label_text":"BAS"} +{"text":"Firstly , there is nothing equivalent to a stack mechanism : at all times the state is characterised by a single syntactic type , and a single semantic value , not by some stack of semantic values or syntax trees which are waiting to be connected together .","label":5,"label_text":"OWN"} +{"text":"The tree-cutting criteria can be local ( `` The LHS of the original grammar rule is an NP , '' ) or dependent on the rest of the parse tree ( `` that doesn't dominate the empty string only , '' ) and previous choices of nodes to cut at ( `` and there is no cut above the current node that is also labelled NP. '' ) .","label":6,"label_text":"OTH"} +{"text":"This FS is the result of the unification between the FSs of subordinate clause and main clause , where the contents of syntactic feature HEAD , namelyis omitted .","label":5,"label_text":"OWN"} +{"text":"The data were recordings of telephone conversations between clients and an expert concerning problems with software .","label":5,"label_text":"OWN"} +{"text":"The results suggest that a completely unconstrained initial model does not produce good quality results , and that one accurately trained from a hand-tagged corpus will generally do better than using an approach based on re-estimation , even when the training comes from a different source .","label":5,"label_text":"OWN"} +{"text":"These results are consistent with other findings for such models , suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model .","label":5,"label_text":"OWN"} +{"text":"That is , we are looking for a predicate that when applied to the subject term of the ellipsis antecedent returns the antecedent .","label":5,"label_text":"OWN"} +{"text":"The node X is the highest S node in, so we go to the 
previous sentence.","label":6,"label_text":"OTH"} +{"text":"As noted in Partee 1984, this analysis does not extend in a straightforward manner to cases in which the operator when is replaced by ( an unrestricted ) before or after , in such quantified contexts .","label":1,"label_text":"CTR"} +{"text":"The nested loop performs the task of reaching thermal equilibrium at each temperature .","label":6,"label_text":"OTH"} +{"text":"Two could have been avoided by other means , i.e. they belong to other classes of tagging errors .","label":5,"label_text":"OWN"} +{"text":"Therefore , we will measure the similarity between two words with respect to their syntactic behavior to , say , their left side by the degree to which they share the same neighbors on the left .","label":5,"label_text":"OWN"} +{"text":"In this latter case , the system will reject the user 's proposal and present or argue for its own proposal .","label":5,"label_text":"OWN"} +{"text":"What these approaches have in common is that they classify words instead of individual occurrences .","label":1,"label_text":"CTR"} +{"text":"The results of training are essentially identical to those given earlier , with the optimal assignment being as shown in figure.","label":5,"label_text":"OWN"} +{"text":"There was some limited evidence that some clusterings for the same condition were significantly better than others , rather than just happening to perform better on the particular test data used .","label":5,"label_text":"OWN"} +{"text":"To show usefulness of the robust parser proposed in this paper , we made some experiments .","label":5,"label_text":"OWN"} +{"text":"Applicability Condition : a predicate .","label":5,"label_text":"OWN"} +{"text":"A candidate foci tree contains the pieces of evidence in a proposed belief tree which , if disbelieved by the user , might change the user 's view of the unaccepted top-level proposed belief ( the root node of that belief tree ) .","label":5,"label_text":"OWN"} +{"text":"For 
instance , distinguishing between different types of fiction by formal or stylistic criteria of this kind may just be something we should not attempt : the fiction types are naturally defined in terms of their content , after all .","label":5,"label_text":"OWN"} +{"text":"His notion of similarity seems to agree with our intuitions in many cases , but it is not clear how it can be used directly to construct word classes and corresponding models of association .","label":1,"label_text":"CTR"} +{"text":"Once a given rule or set of rules is stable , and the writer is satisfied with the performance of that part of the grammar , a local threshold value of 1 may be assigned so that superfluous parses will not interfere with work elsewhere .","label":5,"label_text":"OWN"} +{"text":"In the simplest scheme for calculating the entropy of an or-node , only the RHS phrase of the parent rule , i.e. the dominating and-node , contributes to the entropy , and there is in fact no need to employ an and-or tree at all , since the tree-cutting criterion becomes local to the parse tree being cut up .","label":5,"label_text":"OWN"} +{"text":"Thus , if walked ' has type, thenis a-Prolog ( meta-level ) function with type, andis the object-level representation , with type tm .","label":5,"label_text":"OWN"} +{"text":"By the same token , however , PLIG may only generate structural descriptions in which dependent branches begin at nodes that are siblings of one another .","label":5,"label_text":"OWN"} +{"text":"Corresponding to the first right-hand side literal in the original rule stepderives the following magic rule :","label":6,"label_text":"OTH"} +{"text":"So , these words can be use as discriminant of language .","label":5,"label_text":"OWN"} +{"text":"In order to do so , however , we need knowledge to judge which subtree represents the semantic features of the tree .","label":5,"label_text":"OWN"} +{"text":"For example the logical omniscience assumption would mean that ifandbelow are in 
the context , thenwill be as well since it is entailed fromand.","label":5,"label_text":"OWN"} +{"text":"See Huang 1994b for more details .","label":5,"label_text":"OWN"} +{"text":"In,'s are the records of unification , that contain the costs and the original types ; they become necessary when they are backtracked , and in that meaning , those bindings are transitive .","label":5,"label_text":"OWN"} +{"text":"This completes our mathematical picture of LFG ontology .","label":5,"label_text":"OWN"} +{"text":"We also believe that there are other levels of structure in discourse that are not captured by the control rules , e.g. control shifts do not always correspond with task boundaries .","label":5,"label_text":"OWN"} +{"text":"This in turn makes it relatively easy to provide proofs of soundness and completeness for an incremental parsing algorithm .","label":5,"label_text":"OWN"} +{"text":"In order to avoid this , we assume the unification cost .","label":5,"label_text":"OWN"} +{"text":"Such filtering reduces the rule-sets more than tenfold and does not leave clearly coincidental cases among the rules .","label":5,"label_text":"OWN"} +{"text":"The weights on the connections between input and output nodes are adjusted until a required level of performance is reached .","label":5,"label_text":"OWN"} +{"text":"Conjunction :","label":5,"label_text":"OWN"} +{"text":"96 % accuracy correct assignment of tags to word token , compared with a human annotator , is quoted , over a 500000 word corpus .","label":6,"label_text":"OTH"} +{"text":"The results from trial 1 deserve special attention .","label":5,"label_text":"OWN"} +{"text":"The algorithm employs a greedy heuristic search within a Bayesian framework , and a post-pass using the Inside-Outside algorithm .","label":5,"label_text":"OWN"} +{"text":"The behaviour of LHIP can best be understood in terms of the notions of island , span , cover and threshold :","label":5,"label_text":"OWN"} +{"text":"In other two-person dialogues 
, both parties may contribute discourse entities to the conversation on a more equal basis .","label":5,"label_text":"OWN"} +{"text":"This states the syntactic tense , aspect and polarity marked on the ellipsis ( underscores indicate lack of specification ) .","label":5,"label_text":"OWN"} +{"text":"In order to construct interpretations of the kinds of objects we are interested in , then , we have to start by looking at simple sentences .","label":5,"label_text":"OWN"} +{"text":"In figure, the TNCB composed of nodes 1 , 2 , and 3 is inserted inside the TNCB composed of nodes 4 , 5 and 6 .","label":5,"label_text":"OWN"} +{"text":"The simplest PCA is the operator Derive .","label":5,"label_text":"OWN"} +{"text":"Comparisons are made across five dimensions :","label":5,"label_text":"OWN"} +{"text":"The instantiation ( prediction ) rule of top-down Earley deduction is not needed in bottom-up Earley deduction , because there is no prediction .","label":5,"label_text":"OWN"} +{"text":"Whether the constraints will ultimately require monolingual grammars to be enriched with entirely unmotivated features will only become clear as translation coverage is extended and new language pairs are added .","label":5,"label_text":"OWN"} +{"text":"However , it is not clear how well the directed methods are applicable to grammars which do not depend on concatenation and have no unique ` left corner ' which should be connected to the start symbol .","label":1,"label_text":"CTR"} +{"text":"Agents are logically omniscient .","label":6,"label_text":"OTH"} +{"text":"where the iterations are over each utterancein the corpus , each clusterfrom whichmight arise , and each wordin utterance.","label":5,"label_text":"OWN"} +{"text":"The description will include the information that the verb knows precedes NP, and that the VP dominates NP.","label":6,"label_text":"OTH"} +{"text":"This is conventionally a `` single layer '' net , since there is one layer of processing nodes 
.","label":5,"label_text":"OWN"} +{"text":"Our raw knowledge about the relation consists of the frequenciesof occurrence of particular pairs (v,n) in the required configuration in a training corpus .","label":5,"label_text":"OWN"} +{"text":"This example is characterized by its multiple ambiguous pronouns and by the fact that the final utterance achieves a shift ( see figure) .","label":4,"label_text":"BKG"} +{"text":"Reset activity of all nodes in Paradigme .","label":5,"label_text":"OWN"} +{"text":"It is therefore appropriate to filter translation hypotheses by rescoring according to the version of the overall statistical model that included the factorsso that the target language model constrains the output of the translation model .","label":5,"label_text":"OWN"} +{"text":"Notice that this natural-deduction-style proof in the type logic corresponds very closely to the phrase-structure tree one would like to adopt in an analysis with traces .","label":4,"label_text":"BKG"} +{"text":"Examples of such cooccurrences include relationships between head words in syntactic constructions ( verb-object or adjective-noun , for example ) and word sequences ( n-grams ) .","label":4,"label_text":"BKG"} +{"text":"However , a related and crucial issue is which linguistic tasks are used as a reference .","label":5,"label_text":"OWN"} +{"text":"As a strategy of statistical estimation MDL is guaranteed to be near optimal .","label":6,"label_text":"OTH"} +{"text":"The Core Language Engine ( CLE ) is an application independent , unification based `` general purpose device for mapping between natural language sentences and logical form representations Alshawi 1992.","label":6,"label_text":"OTH"} +{"text":"In an envisioned application , a user will employ a cascade of filters starting with filtering by topic , and continuing with filters by genre or text type , and ending by filters for text quality , or other tentative finer-grained qualifications .","label":5,"label_text":"OWN"} 
+{"text":"There are often situations where the criteria to guide the search are available only for the base cases , for example","label":5,"label_text":"OWN"} +{"text":"We will now describe how the description length for a model is calculated .","label":0,"label_text":"TXT"} +{"text":"An argument can be made that the approach taken here relies on a formalism that entails implementation issues that are more difficult than for the other solutions and inherently not as efficient .","label":5,"label_text":"OWN"} +{"text":"However , they may well be constituents in other grammars .","label":5,"label_text":"OWN"} +{"text":"Better , perhaps , would be to draw on what is good in LFG and to explore the logical options that arise naturally when the model theoretic view is taken as primary .","label":5,"label_text":"OWN"} +{"text":"hyponymy ;","label":5,"label_text":"OWN"} +{"text":"The utterances inconstitute a possible answer that Jane may give to Mary in order to clarify the problem .","label":5,"label_text":"OWN"} +{"text":"Studying quantities of texts , we try to understand as well as possible ways to discriminate languages .","label":2,"label_text":"AIM"} +{"text":"As long as readability indexing schemes are used in descriptive applications they work well to discriminate between text types .","label":5,"label_text":"OWN"} +{"text":"The first phenomenon we noted was that the anaphora distribution indicated that some segments are hierarchically related to others .","label":5,"label_text":"OWN"} +{"text":"Hence , Mark saw them would receive a valid analysis , as would Mary and Mark saw them , provided that the grammar contains a rule for conjoined NPs ; John saw them , on the other hand , would not .","label":5,"label_text":"OWN"} +{"text":"Given a lexicon , the sample can be encoded by replacing words with their respective indices into the lexicon :","label":5,"label_text":"OWN"} +{"text":"This yields a binary tree .","label":6,"label_text":"OTH"} +{"text":"We enter the 
rewrite phase , then , with an ill-formed TNCB. Each move operation must improve it .","label":5,"label_text":"OWN"} +{"text":"In it , NP denotes NOUN PHRASE and S denotes SENTENCE .","label":6,"label_text":"OTH"} +{"text":"The location le jardin-the garden is the final location of the motion .","label":5,"label_text":"OWN"} +{"text":"The class guesser , like the lexicon , is a finite state transducer .","label":5,"label_text":"OWN"} +{"text":"We compared the performance of employing MDL as a criterion in our simulated annealing algorithm , against that of employing MLE by simulation experiments .","label":4,"label_text":"BKG"} +{"text":"Finally , relative entropy is a natural measure of similarity between distributions for clustering because its minimization leads to cluster centroids that are a simple weighted average of member distributions .","label":5,"label_text":"OWN"} +{"text":"For example , since the Brennan et al. salience hierarchy for discourse entities is based on grammatical relation , an implicit assumption is that an utterance only has one subject .","label":6,"label_text":"OTH"} +{"text":"These alignment relations are similar in some respects to the alignments used by Brown et al. 1990 in their surface translation model .","label":3,"label_text":"BAS"} +{"text":"We can write the following formula , whereifandif.","label":5,"label_text":"OWN"} +{"text":"It turns out that this will also be the values ofthat minimize the average distortion between the asymmetric cluster model and the data .","label":5,"label_text":"OWN"} +{"text":"`` To prove Formula , let us consider the two cases by assuming A and B. 
''","label":5,"label_text":"OWN"} +{"text":"Reliability :","label":5,"label_text":"OWN"} +{"text":"Recall that each model is specified by the Cartesian product of a noun partition and a verb partition , and a number of parameters for them .","label":5,"label_text":"OWN"} +{"text":"And to date , grammar checkers and other programs which deal with ill-formed input usually step directly from spelling considerations to a full-scale parse , assuming a complete sentence .","label":6,"label_text":"OTH"} +{"text":"This captures the idea in Grosz et al. 1986 that subjecthood contributes strongly to the priority of an item on the Cf list .","label":6,"label_text":"OTH"} +{"text":"Categorization sentences with linguistic properties shows that difficult problems have sometimes simple solutions .","label":5,"label_text":"OWN"} +{"text":"We have proposed a method of clustering words based on large corpus data .","label":2,"label_text":"AIM"} +{"text":"Moreover , as noted by Zeevat 1990, the use of any kind of ordered threading will tend to fail for Bach-Peters sentences , such as :","label":6,"label_text":"OTH"} +{"text":"We continue in this fashion , updating the reference time , until the second sentence in the discourse is processed .","label":6,"label_text":"OTH"} +{"text":"The proposal of Liberman and Sproat 1992 is more sophisticated and allows for the frequency of the words in the compound .","label":6,"label_text":"OTH"} +{"text":"The system imposes a penalty on the left node for the RightNucleus relation , and also on the right node for the LeftNucleus relation .","label":5,"label_text":"OWN"} +{"text":"do away with subjectivity inherent in a hand made thesaurus , and","label":4,"label_text":"BKG"} +{"text":"The anaphora procedure skips the resolution of a given anaphor when this anaphor is preceded by an unattached preposition .","label":5,"label_text":"OWN"} +{"text":"This cluster was one presented by Brown et al. as a randomly-selected class , rather than one hand-picked 
for its coherence .","label":6,"label_text":"OTH"} +{"text":"Cutting up the parse trees will involve selecting a set of or-nodes in the and-or tree .","label":5,"label_text":"OWN"} +{"text":"CLE partial parsing , using left-corner analysis combined with top-down prediction on the results of the phrasal phase , looks for complete phrases and breaks down a wordstring into maximal segments .","label":5,"label_text":"OWN"} +{"text":"However , in a concurrent model , this kind of strict order is a hindrance and contingent conversions are required .","label":1,"label_text":"CTR"} +{"text":"These results provide support for the control rules .","label":5,"label_text":"OWN"} +{"text":"For some examples , particularly wherecontains a single pronoun andis a retention , some informants seem to have a preference for shifting , whereas the centering algorithm chooses a continuation ( see figure) .","label":5,"label_text":"OWN"} +{"text":"Furthermore , it lends itself readily to an extension for the intensional verb case that has advantages over the widely-assumed account of Partee and Rooth 1983.","label":1,"label_text":"CTR"} +{"text":"choosing criteria to cluster texts of the same genre , with predictable precision and recall .","label":4,"label_text":"BKG"} +{"text":"For instance , the noun phrase , `` a brown cow , '' consists of an edge extending to the right from `` a , '' an edge extending to the left from `` cow , '' and an edge extending straight up from `` brown '' .","label":5,"label_text":"OWN"} +{"text":"harry would get type-raised by the raise clause to produce, and then composed with found , with the result shown in the following query :","label":5,"label_text":"OWN"} +{"text":"For the maximum likelihood argument , we start by estimating the likelihood of the sequence S of N independent observations of pairs.","label":5,"label_text":"OWN"} +{"text":"Note that Fig.represents not only the hierarchical structure but also the word order of a complex sentence in 
Japanese .","label":6,"label_text":"OTH"} +{"text":"In Clark and Wilkes-Gibbs's experiments , for example , it is one of the tangram figures .","label":6,"label_text":"OTH"} +{"text":"Consider :","label":5,"label_text":"OWN"} +{"text":"Consider the following phrases :","label":4,"label_text":"BKG"} +{"text":"Rather than giving such prominence to the root feature structure , we suggest that the entire derivation tree should be seen as the object that is derived from the input , i.e. , this is what the parser returns .","label":5,"label_text":"OWN"} +{"text":"A lexical functional grammar consists of three main components : a set of context free rules annotated with schemata , a set of well formedness conditions on feature structures , and a lexicon .","label":4,"label_text":"BKG"} +{"text":"The rationale for this is that we wish to cut up the parse trees where we can expect a lot of variation i.e. where it is difficult to predict which rule will be resolved on next .","label":5,"label_text":"OWN"} +{"text":"Finally , we check if the new transcription is better than the best transcription found so far ( BestTrans ) and if so , we set BestTrans to be the new transcription .","label":6,"label_text":"OTH"} +{"text":"While the AWM model is extremely simple , Landauer showed that it could be parameterized to fit many empirical results on human memory and learning Baddeley 1986.","label":6,"label_text":"OTH"} +{"text":"For each critical value of, we show the relative entropy with respect to the asymmetric model based onof the training set ( set train ) , of randomly selected held-out test set ( set test ) , and of held-out data for a further 1000 nouns that were not clustered ( set new ) .","label":5,"label_text":"OWN"} +{"text":"It has the effect of dividing the evidence from a training instance across all possible categories for the words .","label":5,"label_text":"OWN"} +{"text":"These figures give a positive evaluation of the coverage of WordNet 
.","label":5,"label_text":"OWN"} +{"text":"Because of the size of the search space , ( roughlywhere | T | is the number of part-of-speech tags , n is the number of words in the sentence , and | N | is the number of non-terminal labels ) , it is not possible to compute the probability of every parse .","label":5,"label_text":"OWN"} +{"text":"Determiner in subject position is ` type-raised ' to :","label":5,"label_text":"OWN"} +{"text":"Firstly , recall that our grammars model a sentence as a sequence of independently generated symbols ; however , in language there is a large dependence between adjacent constituents .","label":5,"label_text":"OWN"} +{"text":"To improve categorization of short sentences , a simple way is the use of the alphabet .","label":5,"label_text":"OWN"} +{"text":"Each layer also contains a bias unit , which is always activated .","label":6,"label_text":"OTH"} +{"text":"Evaluation on NLP has been crucial to fostering research in particular areas .","label":5,"label_text":"OWN"} +{"text":"We thus see how the TNCBs can mirror the dominance information in the source language parse in order to furnish the generator with a good initial guess .","label":5,"label_text":"OWN"} +{"text":"( For instance the scopes of quantifiers or the contextual restrictions on pronouns in the antecedent may not have been resolved ; this will correspond to the presence of uninstantiated meta-variables in the antecedent QLF. 
)","label":5,"label_text":"OWN"} +{"text":"We denote trees with first order terms .","label":5,"label_text":"OWN"} +{"text":"Lists successful rules , indicating island position and coverage .","label":5,"label_text":"OWN"} +{"text":"Sibun 1990implemented a system generating descriptions for objects with a strong domain structure , such as houses , chips and families .","label":6,"label_text":"OTH"} +{"text":"S \/ he is merely present to execute the actions indicated by the knowledgeable participant .","label":5,"label_text":"OWN"} +{"text":"Our first task is to mathematically model this ontology , and to do so as transparently as possible .","label":5,"label_text":"OWN"} +{"text":"More formally , the probability mass given by q to the set of all samples of length n with relative frequency distribution p is bounded byCover and Thomas 1991.","label":5,"label_text":"OWN"} +{"text":"Magic compilation does not limit the information that can be used for filtering .","label":6,"label_text":"OTH"} +{"text":"The reduction in theoretical complexity is achieved by placing constraints on the power of the target grammar when operating on instantiated signs , and by using a more restrictive data structure than a bag , which we call a target language normalised commutative bracketing ( TNCB ) .","label":5,"label_text":"OWN"} +{"text":"Its novel features are that it treats tense , aspect , temporal adverbials and rhetorical relations as mutually constraining ; it postulates less ambiguity than current temporal structuring algorithms do ; and it uses semantic closeness and other preference techniques rather than full-fledged world knowledge postulates to determine preferences over remaining ambiguities .","label":5,"label_text":"OWN"} +{"text":"The second type uses a window to collect training instances by observing how often a pair of nouns co-occur within some fixed number of words .","label":3,"label_text":"BAS"} +{"text":"A commonly-used technique for smoothing is deleted 
interpolation .","label":6,"label_text":"OTH"} +{"text":"Fully countable nouns , such as knife have both singular and plural forms , and cannot be used with determiners such as much .","label":4,"label_text":"BKG"} +{"text":"We maintain this property throughout the search process , that is , for every symbol A ' that we add to the grammar , we also add a rule.","label":5,"label_text":"OWN"} +{"text":"The second role can be captured by the parser constructing semantic representations directly .","label":5,"label_text":"OWN"} +{"text":"The basic idea is as follows .","label":5,"label_text":"OWN"} +{"text":"A follow-up to this work might be to apply similar tests in other languages to provide a further confirmation of the results , and to see if language families which similar characteristics can be identified .","label":5,"label_text":"OWN"} +{"text":"Conte and Castelfranchi 1993present several strategies of moving from obligations to actions , including : automatically performing an obligated action , adopting all obligations as goals , or adopting an obligated action as a goal only when performing the action results in a state desired by the agent .","label":6,"label_text":"OTH"} +{"text":"This corpus , implicitly defining the types of construction the grammar is intended to cover , was written by the linguist who developed the ANLT grammar and is used to check for any adverse effects on coverage when the grammar is modified during grammar development .","label":5,"label_text":"OWN"} +{"text":"The building blocks of genetic search discussed above are structured into the following algorithm , expressed in pseudo-Pascal :","label":6,"label_text":"OTH"} +{"text":"( The latter is kept track of by array normalization in the pseudocode . )","label":5,"label_text":"OWN"} +{"text":"If information of the first type is missing ( e.g. 
, because of problems during analysis , see section) , an assumption can be made locally by simulating the respective part of the input .","label":5,"label_text":"OWN"} +{"text":"First ,associates atoms only with final nodes of f-structures ; and asis a function , the atom so associated is unique .","label":5,"label_text":"OWN"} +{"text":"ContextKarttunen 1974,Kay 1992, procedural rulesGazdar 1979,Karttunen and Peters 1979, lexical and syntactic structureWeischedel 1979, intentionsHirschberg 1985, or anaphoric constraintsSandt 1992,Zeevat 1992decide what presuppositions or implicatures are projected as pragmatic inferences for the utterance that is analyzed .","label":6,"label_text":"OTH"} +{"text":"In the 1970 s and early 1980 s several computational implementations motivated the use of incremental interpretation as a way of dealing with structural and lexical ambiguity ( a survey is given inHaddock 1989) .","label":6,"label_text":"OTH"} +{"text":"Each program was tested on Fsequences of length 5 , 10 , 15 and 20 .","label":5,"label_text":"OWN"} +{"text":"An experiment is conducted with 160,000 word collocations to analyze compound nouns of with an average length of 4.9 characters .","label":5,"label_text":"OWN"} +{"text":"Recall that a Semitic stems consists of a root morpheme and a vocalism morpheme arranged according to a canonical pattern morpheme .","label":5,"label_text":"OWN"} +{"text":"In contrast to the ECD , the meaning of the collocate is represented by the lexical function only .","label":5,"label_text":"OWN"} +{"text":"In HPSG , the descriptions of complex expressions are constrained by principles .","label":5,"label_text":"OWN"} +{"text":"Are Irish , Manx , and Scottish Gaelic considered three separate languages for intrinsic linguistic reasons , or because they are spoken in different countries ?","label":4,"label_text":"BKG"} +{"text":"On the assumption that saw selects for a PP instrumental argument , we can derive this preference in the present 
model via the preference to attach as an argument as opposed to an adjunct .","label":5,"label_text":"OWN"} +{"text":"Finally , consider the counterpart of the completeness principle , the coherence principle .","label":5,"label_text":"OWN"} +{"text":"But , ideally , salience should depend on the context surrounding the referent .","label":1,"label_text":"CTR"} +{"text":"The anaphora-resolution and the topic-supplementation must also be realized in the analysis .","label":5,"label_text":"OWN"} +{"text":"Thus we have an appropriate level to perform substring coordination .","label":6,"label_text":"OTH"} +{"text":"It seems to me that this might be an idiomatic use .","label":5,"label_text":"OWN"} +{"text":"It happens because erroneous senses , metonymies , ... , accumulate evidence for the higher class .","label":5,"label_text":"OWN"} +{"text":"In particular , much of one 's behavior arises from a sense of obligation to behave within limits set by the society that the agent is part of .","label":5,"label_text":"OWN"} +{"text":"We put no boundaries upon the time when such a cancellation can occur , and we offer a unified explanation for pragmatic inferences that are inferable when simple utterances , complex utterances , or sequences of utterances are considered .","label":5,"label_text":"OWN"} +{"text":"When an agent performs a promise to perform an action , or performs an acceptance of a suggestion or request by another agent to perform an action , the agent obliges itself to achieve the action in question .","label":5,"label_text":"OWN"} +{"text":"In this derivation tree , the node labelledis a distinguished descendant of the root and is the first point belowat which the top symbol () of the ( unbounded ) stackis exposed .","label":4,"label_text":"BKG"} +{"text":"Since the bottom-up algorithm does not have a prediction step , there is no need for the costly operation of subsumption checking .","label":5,"label_text":"OWN"} +{"text":"More recently , the field of 
dialectometry , as introduced bySguy 1971,Sguy 1973, has addressed these issues by developing several techniques for summarizing and presenting variation along multiple dimensions .","label":6,"label_text":"OTH"} +{"text":"In other words , the hearer can understand a referring expression if its content uniquely describes an object that he knows about .","label":6,"label_text":"OTH"} +{"text":"The substitutional treatment of ellipsis presented here has broadly the same coverage asDalrymple et al.'s higher-order unification treatment , but has the computational advantages of","label":2,"label_text":"AIM"} +{"text":": source to target transfer","label":5,"label_text":"OWN"} +{"text":"The temporal relation between these continuations and the portion of earlier text they attach to is constrained along the lines sketched before .","label":5,"label_text":"OWN"} +{"text":"Understanding utterances standing in a Coherent Situation relation requires that hearers convince themselves that the utterances describe a coherent situation given their knowledge of the world .","label":5,"label_text":"OWN"} +{"text":"We are therefore forced to split this schematic phrase structure rule into two more specific rules at least during the optimization process .","label":5,"label_text":"OWN"} +{"text":"A simple , intuitive way of measuing the size of a hypothesis is to count the number of characters used to represent it .","label":6,"label_text":"OTH"} +{"text":"These measurements must be made by doing comparisons at the data level .","label":5,"label_text":"OWN"} +{"text":"Perhaps this is not too surprising , but it is useful to have an experimental confirmation that the linguistics matters rather than the engineering .","label":5,"label_text":"OWN"} +{"text":"This can be based on analyzing when the information that is repeated has been specifically requested , such as in the caller 's opening question or by a request for information from Harry .","label":5,"label_text":"OWN"} +{"text":"Our 
type of clustering , then , is based on the assumption that the utterances to be modeled , as sampled in a training corpus , fall more or less naturally into some number of clusters so that words or other objects associated with utterances have probability distributions that differ between clusters .","label":5,"label_text":"OWN"} +{"text":"The similarityalso increases with the co-occurrence tendency of words , for example :","label":5,"label_text":"OWN"} +{"text":"We then define the SDL-grammaras follows :","label":5,"label_text":"OWN"} +{"text":"Parsing a natural language sentence can be viewed as making a sequence of disambiguation decisions : determining the part-of-speech of the words , choosing between possible constituent structures , and selecting labels for the constituents .","label":4,"label_text":"BKG"} +{"text":"At the syntactic level , we allow atomic formulas to be labelled according to the same underlying lattice .","label":5,"label_text":"OWN"} +{"text":"We thank Mr. K. Nakamura , Mr. T. Fujita , and Dr. K. Kobayashi of NEC C & C Res. Labs. 
for their constant encouragement .","label":5,"label_text":"OWN"} +{"text":"Two tests were conducted with each combination of the degradation and similarity , using different corpora ( from the Penn treebank ) ranging in size from approximately 50000 words to 500000 words .","label":5,"label_text":"OWN"} +{"text":"As shown in Figure, bottle_1 and wine_1 have high activity in the pattern produced from the phrase `` red alcoholic drink '' .","label":5,"label_text":"OWN"} +{"text":"Reference networks have been successfully used as neural networks ( byVronis and Ide 1990for word sense disambiguation ) and as fields for artificial association , such as spreading activation ( byKozima and Furugori 1993for context-coherence measurement ) .","label":6,"label_text":"OTH"} +{"text":"The current approach accounts for these cases .","label":5,"label_text":"OWN"} +{"text":"Asdecreases , remote words get a larger effect .","label":5,"label_text":"OWN"} +{"text":"We assume that there exists some set, representing the set of word senses that an ideal human judge would conclude belong to the group of senses corresponding to the word grouping W .","label":5,"label_text":"OWN"} +{"text":"Form a new lambda expression by combining the lambda expression formed after parsing Wordwith the lexical semantics for Word","label":6,"label_text":"OTH"} +{"text":"There are different types of text .","label":4,"label_text":"BKG"} +{"text":"The two structures therefore fail to merge since the structure dominating the shared material TNT deliver must be identical .","label":1,"label_text":"CTR"} +{"text":"Having the set of tone transcriptions that are compatible with an utterance has considerable value to an analyst searching for invariances in the tonal assignments to individual morphemes .","label":6,"label_text":"OTH"} +{"text":"However , since conventional word-frequency-based abstract generation systemsKuhn 1958are lacking in inter-sentential or discourse-structural analysis , they are liable 
to generate incoherent abstracts .","label":1,"label_text":"CTR"} +{"text":"We assume that the first sentence has been processed , and concentrate on processing the fragment .","label":5,"label_text":"OWN"} +{"text":"In some cases , the sequence was not a noun compound ( nouns can appear adjacent to one another across various constituent boundaries ) and was marked as an error .","label":5,"label_text":"OWN"} +{"text":"While the former borrows from advanced linguistic specifications of syntax , the latter has been more concerned with extracting distributional regularities from language to aid the implementation of NLP systems and the analysis of corpora .","label":4,"label_text":"BKG"} +{"text":"Since collaborative agents are expected to engage in effective and efficient dialogues , the system should address the unaccepted belief that it predicts will most quickly resolve the top-level conflict .","label":5,"label_text":"OWN"} +{"text":"This paper describes the author 's implementation of a parser aimed at reproducing , in a computationally explicit system , the constraints of a particular psycholinguistic modelGorrell in press.","label":2,"label_text":"AIM"} +{"text":"If an input node is not already linked to the output node representing the desired response , it will be connected and the weight on the connection will be initialised to 1.0 .","label":5,"label_text":"OWN"} +{"text":"Thus orderly control shifts occur when the controller explicitly indicates that s \/ he wishes to relinquish control .","label":6,"label_text":"OTH"} +{"text":"The GLR* parser has a capability for handling common word substitutions when the parser 's input string is the output of a speech recognition system .","label":6,"label_text":"OTH"} +{"text":"All three parsers accept grammars written in the ANLT formalismBriscoe et al. 
1987a, and the first two are distributed as part of the ANLT package .","label":6,"label_text":"OTH"} +{"text":"The semantic relation between the phrase heavy smoker and its French counterpart can be made explicit in the following bilingual sign :","label":5,"label_text":"OWN"} +{"text":"In these cases the anaphoric expression is resolved on purely semantic grounds ; therefore VP-ellipsis is only constrained to having a suitable semantic antecedent .","label":5,"label_text":"OWN"} +{"text":"The recursion terminates if Id is the special rule identifier lex and thus dominates a word of the training sentence , rather than a list of subtrees .","label":5,"label_text":"OWN"} +{"text":"We found that no two case slots are determined as dependent in any of the case frame patterns .","label":5,"label_text":"OWN"} +{"text":"To see this , assume ( by induction ) that all four of the daughter nonterminals are associated with the full binary tree of height i () .","label":5,"label_text":"OWN"} +{"text":"However , we feel the largest contribution of this work does not lie in the actual algorithm specified , but rather in its indication of the potential of the induction framework described bySolomonoffin 1964 .","label":3,"label_text":"BAS"} +{"text":"As an additional test , we tried assigning the most probable tag from the D0 lexicon , completely ignoring tag-tag transitions .","label":5,"label_text":"OWN"} +{"text":"This second step replaces the use of the independence assumption in the original back-off model .","label":5,"label_text":"OWN"} +{"text":"Let,","label":5,"label_text":"OWN"} +{"text":"Our analysis ofPartee's quantification problem uses a different notion of reference time than that used by the accounts in the exposition above .","label":1,"label_text":"CTR"} +{"text":"The best of these methods are reported to achieve 82 - 85 % of tagging accuracy on unknown wordsBrill 1995,Weischedel et al. 
1993.","label":6,"label_text":"OTH"} +{"text":"Preceding this list is a single integer denoting the length of each length field ; this integer is represented in unary , so that its length need not be known in advance .","label":5,"label_text":"OWN"} +{"text":"It makes no sense to apply the substitutions before the antecedent is fully resolved , though it does make sense to decide what the appropriate substitutions should be .","label":5,"label_text":"OWN"} +{"text":"The former figure looks more impressive , but the latter gives a better measure of how well the tagger is doing , since it factors out the trivial assignment of tags to non-ambiguous words .","label":5,"label_text":"OWN"} +{"text":"not requiring order-sensitive interleaving of different resolution operations , and","label":2,"label_text":"AIM"} +{"text":"At this point Jane realizes that Mary misunderstands her : all the time Jane was talking about John Pevler , the five-year-old boy .","label":5,"label_text":"OWN"} +{"text":"In German most of the effort is going into subclassification within major word classes , while in English and French a good deal of disambiguation work is devoted to separate major word classes .","label":5,"label_text":"OWN"} +{"text":"We developed a simple , extremely compact and efficient guesser for French .","label":5,"label_text":"OWN"} +{"text":"Multi-layer networks , which can process linearly inseparable data , were also investigated , but are not necessary for this particular processing task .","label":5,"label_text":"OWN"} +{"text":"that is , the probability that head h has n r-dependents .","label":5,"label_text":"OWN"} +{"text":"The ` glottal _ change ' rule would be a normal morphological spelling change rule , incorporating contextual constraints ( e.g. 
for the morpheme boundary ) as necessary .","label":5,"label_text":"OWN"} +{"text":"This does not change the semantics of the original grammar as it merely serves as a way to incorporate the relevant bindings derived with the magic predicates to avoid redundant applications of a rule .","label":6,"label_text":"OTH"} +{"text":"Semantic information is expressed in","label":4,"label_text":"BKG"} +{"text":"We capture the attempt to resolve a conflict with the problem-solving action Modify-Proposal , whose goal is to modify the proposal to a form that will potentially be accepted by both agents .","label":5,"label_text":"OWN"} +{"text":"This requires a reasonable definition of verb similarity and a similarity estimation method .","label":6,"label_text":"OTH"} +{"text":"C1 can not be evaluated prior to the head and once H is evaluated it is no longer possible to evaluate C1 prior to C2 .","label":5,"label_text":"OWN"} +{"text":"Examplealso illustrates this change from specific (,,) to general.","label":5,"label_text":"OWN"} +{"text":"This can be done using for example interval bisection .","label":5,"label_text":"OWN"} +{"text":"The maximum likelihood ( ML ) estimation principle is thus the natural tool to determine the centroid distributions.","label":5,"label_text":"OWN"} +{"text":"However , once we have started to think in terms of merging , there is an obvious next step , which is to move from merging of word strings to merging of syntax trees .","label":5,"label_text":"OWN"} +{"text":"Figuredefines a valuation relation for the QLF fragment used above , derived fromAlshawi and Crouch 1992,Cooper et al. 
1994a.","label":5,"label_text":"OWN"} +{"text":"While this framework does not restrict us to a particular grammar formalism , in our work we consider only probabilistic context-free grammars .","label":5,"label_text":"OWN"} +{"text":"If we are to implement a probabilistic version of a modular grammar theory incorporating a Case component , a relevant question is : are there multiple ways of assigning Case to noun phrases in a sentence ?","label":5,"label_text":"OWN"} +{"text":"In addition , the paths with a value of a maximal specific type for which there are no appropriate features specified , for example , the path cat , can be considered bound :","label":5,"label_text":"OWN"} +{"text":"Utterances which are intended to elicit information , including indirect forms such as I was wondering whether I should ...","label":6,"label_text":"OTH"} +{"text":"The results of the evaluation are extremely encouraging , especially considering that disambiguating word senses to the level of fine-grainedness found in WordNet is quite a bit more difficult than disambiguation to the level of homographsHearst 1991,Cowie et al. 
1992.","label":1,"label_text":"CTR"} +{"text":"Evaluation :","label":5,"label_text":"OWN"} +{"text":"To upgrade this robust parser we proposed heuristics through the analysis on the Penn treebank corpus .","label":5,"label_text":"OWN"} +{"text":"The number of readings obtained for ` John revised his paper before the teacher did , and then Simon did ' was used as a benchmark byDalrymple et al..","label":6,"label_text":"OTH"} +{"text":"Suppose that we generalize LIG to allow the stack to be passed from the mother to two daughters .","label":5,"label_text":"OWN"} +{"text":"This general tendency is also observed in another example thesaurus obtained by our method , shown in Figure.","label":5,"label_text":"OWN"} +{"text":"Thus , provision of an effective method of learning dependencies between case slots , as well as investigation of the usefulness of the acquired dependencies in disambiguation and other natural language processing tasks would be an important contribution to the field .","label":2,"label_text":"AIM"} +{"text":"The improvement we achieved for a bigram model is statistically significant , though modest in its overall effect because of the small proportion of unseen events .","label":5,"label_text":"OWN"} +{"text":"Figureillustrates the system from the viewpoint of the dialogue manager .","label":6,"label_text":"OTH"} +{"text":"This is not difficult to determine , although the standard methods do not support automatic determination of standard deviation or skewness as discrimination criteria .","label":5,"label_text":"OWN"} +{"text":"Transcribed natural speech contains a number of frequent characteristic ` ungrammatical ' phenomena : filled pauses , repetitions , restarts , etc. 
( as in e.g.","label":4,"label_text":"BKG"} +{"text":"These semantic properties can be characterized by a restructuration of the space induced by the so-called reference location ( lref )Talmy 1983.","label":5,"label_text":"OWN"} +{"text":"A text is not just a sequence of words , but it also has coherent structure .","label":4,"label_text":"BKG"} +{"text":"We do not get a sixth , implausible reading , provided that in the first clause his is resolved as being coindexed with the for John ; i.e. that John and his do not both independently refer to the same individual .","label":5,"label_text":"OWN"} +{"text":"collect word collocations , at this time we collect only patterns of word collocations , but we do not care about occurrence frequencies of the patterns .","label":5,"label_text":"OWN"} +{"text":"Two of the main principles of the algorithm are :","label":5,"label_text":"OWN"} +{"text":"There were a total of 96 such disagreements .","label":5,"label_text":"OWN"} +{"text":"This representation allows for easy checking the coincidence between a chosen default and input given later .","label":5,"label_text":"OWN"} +{"text":"Paradigme is systematically constructed from Glossme , a subset of an English dictionary .","label":5,"label_text":"OWN"} +{"text":"In this dialogue , the system needed only to follow the initiative of the user .","label":5,"label_text":"OWN"} +{"text":"There are currently four error rules , corresponding to the fourDamerautransformations : omission , insertion , transposition , substitutionDamerau 1964- considered in that orderPollock 1983.","label":3,"label_text":"BAS"} +{"text":"The language model factors the statistical derivation of a sentence with word string W as follows :where C ranges over relation graphs .","label":5,"label_text":"OWN"} +{"text":"Note that there is no garden path effect even if the preposition is separated from the disambiguating head noun by a series of adjectives : ( `` I saw the man with the neat , quaint , 
old-fashioned moustache \/ telescope '' ) .","label":5,"label_text":"OWN"} +{"text":"Given the representation in Figureas the source , the semantics for the missing VP may be recovered in one of two ways .","label":5,"label_text":"OWN"} +{"text":"These parameters - languages with a complex morphology \/ syntax interface but a limited number of affix combinations , tasks where the lexicon is not necessarily known at compile time , bidirectional processing , and the need to ease development rather than optimize run-time efficiency - dictate the design of the morphology compiler described in this paper , in which spelling rules and possible affix combinations ( itemsand) , but not the lexicon ( item) , are composed in the compilation phase .","label":2,"label_text":"AIM"} +{"text":"A typical schema of SEM of FS of this type is the following .","label":5,"label_text":"OWN"} +{"text":"The first supplies each edge in the chart with two indices , a backward index pointing to the state in the chart that the edge is predicted from , and a forward index pointing to the states that are predicted from the edge .","label":5,"label_text":"OWN"} +{"text":"There is a wealth of formul to compute readability .","label":5,"label_text":"OWN"} +{"text":"The hierarchical planning is realized by so-called top-down presentation operators that split the task of presenting a particular proof into subtasks of presenting subproofs .","label":5,"label_text":"OWN"} +{"text":"In 15 cases , these shifts to global focus are marked syntactically with a cue word such as Now , and are not marked in 5 cases .","label":5,"label_text":"OWN"} +{"text":"In some cases the divisions were more than two-way : for example ,Wagnerdistinguishes whether the final consonant in creic is unpalatalized , palatalized , or slightly palatalized .","label":5,"label_text":"OWN"} +{"text":"So an obvious question is when the two algorithms actually make different predictions .","label":5,"label_text":"OWN"} +{"text":"Notice 
that our predicted Viterbi parse can stray a great deal from the actual Viterbi parse , as errors can accumulate as move after move is applied .","label":5,"label_text":"OWN"} +{"text":"Although the time and space complexities of CF versions of the LR and CE parsers are, the unification versions of these parsers both turn out to have time bounds that are greater than cubic , in the general case .","label":5,"label_text":"OWN"} +{"text":"Since a participant must interrupt if any condition for an interrupt holds , then lack of interruption signals that there is no discrepancy in mutual beliefs .","label":5,"label_text":"OWN"} +{"text":"Traditional dialectological methodology gives little guidance as to how to perform such reduction to one dimension .","label":1,"label_text":"CTR"} +{"text":"As noted in Section, this group represents a set of words similar to burglar , according toSchuetze's method for deriving vector representation from corpus behavior .","label":6,"label_text":"OTH"} +{"text":"Atthe two copies ofare distributed across the daughters .","label":5,"label_text":"OWN"} +{"text":"Further , it cannot be assumed that the lexicon has been fully specified when the morphology rules are compiled .","label":5,"label_text":"OWN"} +{"text":"The substitutions can be built up in an order-independent way ( i.e. 
before , after or during scoping ) , and without recourse to higher-order unification .","label":5,"label_text":"OWN"} +{"text":"There are probably many reasons why performance is much better than the complexity results suggest , but the most important may be that :","label":5,"label_text":"OWN"} +{"text":"If possible , she should use the suggestion to elaborate the plan , thus avoiding unwanted conversational implicature , but its use may not be enough to make the plan adequate .","label":5,"label_text":"OWN"} +{"text":"Biber 1993applies factor analysis to collocations of two target words ( `` certain '' and `` right '' ) with their immediate neighbors .","label":6,"label_text":"OTH"} +{"text":"Coordination of this kind is traditionally split into constituent coordination , where each conjunct forms a constituent according to ` standard ' phrase structure grammars , and non-constituent coordination .","label":4,"label_text":"BKG"} +{"text":"One possibility , which has been explored mainly within the Categorial Grammar traditionSteedman 1988is to provide a grammar which can treat most if not all initial fragments as constituents .","label":6,"label_text":"OTH"} +{"text":"This problem has been investigated in the area of machine learning and related fields .","label":6,"label_text":"OTH"} +{"text":"If there exists a stateinandis a nonterminal then addintoif possible .","label":5,"label_text":"OWN"} +{"text":"For analysis and generation , we are treating strings s and logical formsas object level entities .","label":5,"label_text":"OWN"} +{"text":"Our working prototype indicates that the methods described here are worth developing , and that connectionist methods can be used to generalise from the training corpus to unseen text .","label":2,"label_text":"AIM"} +{"text":"In the rules above , ` X ' is the shifted vowel .","label":5,"label_text":"OWN"} +{"text":"The following table summarizes the properties of these five combination schemes 
.","label":5,"label_text":"OWN"} +{"text":"This is the test phase .","label":5,"label_text":"OWN"} +{"text":"The algorithm is applied on the text sentence by sentence , i.e. the ambiguities of the previous sentences have already been considered ( resolved or not ) .","label":5,"label_text":"OWN"} +{"text":"Moreover , despite the fact that the training is performed on a particular lexicon and a particular corpus , the obtained guessing rules suppose to be domain and corpus independent and the only training-dependent feature is the tag-set in use .","label":5,"label_text":"OWN"} +{"text":"But since stack length is at least proportional to the length of the input string , the resultant algorithm would exhibit exponential space and time complexity in the worst case .","label":6,"label_text":"OTH"} +{"text":"In fact the probability to find the system at a given configuration is exponential in F","label":5,"label_text":"OWN"} +{"text":"We aim to develop an efficient storing mechanism using a hierarchy of locally intersecting core descriptions .","label":5,"label_text":"OWN"} +{"text":"The same theory can be interpreted from a perspective that allows more freedom ( u-satisfaction ) , or from a perspective that is tighter and that signals when some defeasible information has been cancelled ( i - and d-satisfaction ) .","label":5,"label_text":"OWN"} +{"text":"Terms and indices not dischargeable in this manner lead to uninterpretable QLFsAlshawi and Crouch 1992.","label":5,"label_text":"OWN"} +{"text":"Here , we treat and as a binary relation .","label":5,"label_text":"OWN"} +{"text":"However , there are cases where a single constituent appears to yield more than one contribution to the meaning of an utterance .","label":1,"label_text":"CTR"} +{"text":"The outcome of the two experiments together points to heuristics for making effective use of training and re-estimation , together with some directions for further research .","label":5,"label_text":"OWN"} +{"text":"For words 
which failed to be guessed by the guessing rules we applied the standard method of classifying them as common nouns ( NN ) if they are not capitalised inside a sentence and proper nouns ( NP ) otherwise .","label":5,"label_text":"OWN"} +{"text":"to select a parse that reflected the lexical changes that had been undergone , e.g. the greater likelihood of an NP featuring in the verb 's theta grid .","label":5,"label_text":"OWN"} +{"text":"A head-driven generator has to rely on a similar solution , as it will not be able to find a successful ordering for the local trees either , simply because it does not exist .","label":5,"label_text":"OWN"} +{"text":"Other research directed towards improving the throughput of unification-based parsing systems has been concerned with the unification operation itself , which can consume up to 90 % of parse timeTomabechi 1991in systems using lexicalist grammar formalisms ( e.g. HPSG )Pollard and Sag 1987.","label":6,"label_text":"OTH"} +{"text":"Applicative Categorial Grammar is the most basic form of Categorial Grammar , with just a single combination rule corresponding to function application .","label":6,"label_text":"OTH"} +{"text":"That is , we used it to learn the conditional distributions,, whereandvary over the internal nodes in a certain ` cut ' in the thesaurus tree .","label":3,"label_text":"BAS"} +{"text":"Green's algorithm makes use of discourse expectations , discourse plans , and discourse relations .","label":6,"label_text":"OTH"} +{"text":"In the example discussed above , the meaning of the ellipsis is built up in the same way as for the antecedent , except that whenever you encounter a term corresponding to ` John ' or something dependent \/ co-indexed with it , it is treated as though it were the term for ` Mary ' or dependent \/ co-indexed with it .","label":5,"label_text":"OWN"} +{"text":"One might wonder whether the subject is She or Mr. 
Vale .","label":1,"label_text":"CTR"} +{"text":"These include :","label":5,"label_text":"OWN"} +{"text":"Consider the sentence :","label":1,"label_text":"CTR"} +{"text":"There is no need to explicitly rule out, as the transition NP [][ N ] will be vanishingly rare in any corpus of even the most garbled speech , while the transition N [][ S ( rel ) ] is commonly met with in both written and spoken English .","label":5,"label_text":"OWN"} +{"text":"The two minor types are subsets of fully countable and uncountable nouns respectively .","label":4,"label_text":"BKG"} +{"text":"This naturally leads to the question of how best to introduce quantitative modeling into language processing .","label":5,"label_text":"OWN"} +{"text":"For example ,Hirschberg 1985has shown that in order to understand a scalar implicature , one must analyze the conversants ' beliefs and intentions .","label":4,"label_text":"BKG"} +{"text":"The order in which the nodes of the example sentence are constructed is indicated in the figure .","label":5,"label_text":"OWN"} +{"text":"This rule is represented in the grammar as a part of the description of subcategorization frames for verbs .","label":5,"label_text":"OWN"} +{"text":"The processing of in refines this set to rabbits which are in something .","label":6,"label_text":"OTH"} +{"text":"At best , isoglosses for different features approach each other , forming vague bundles ; at worst , isoglosses may cut across each other , describing completely contradictory binary divisions of the dialect area .","label":1,"label_text":"CTR"} +{"text":"This conditional probability can be expressed as followsChang and Su 1993:","label":5,"label_text":"OWN"} +{"text":"Intuitivelycorefer with.","label":5,"label_text":"OWN"} +{"text":"Words in a language are organized by two kinds of relationship .","label":4,"label_text":"BKG"} +{"text":"However , consider the magic rulesandin figure.","label":5,"label_text":"OWN"} +{"text":"Constraints in Tableare also local in a 
main clause because every semantic role that appears in the righthand side of the constraints is defined within the main clause .","label":5,"label_text":"OWN"} +{"text":"For example , if a natural language understanding system is interfaced with a speech recognition component , chances are that this component is uncertain about the actual string of words that has been uttered , and thus produces a word lattice of the most promising hypotheses , rather than a single sequence of words .","label":4,"label_text":"BKG"} +{"text":"We conclude this paper with the following remarks .","label":0,"label_text":"TXT"} +{"text":"As the conjoinable categories become more complex , the and entries become correspondingly more complex and greatly obscure the theoretical background of the grammar formalism .","label":1,"label_text":"CTR"} +{"text":"When adding to statesets , if stateis a candidate for admission to a stateset which already has a similar memberand e 'e , thenis rejected .","label":6,"label_text":"OTH"} +{"text":"Considering the first of these points , namely a close relation to a simple probabilistic model , a good place to start the search might be with a right-branching finite-state grammar .","label":5,"label_text":"OWN"} +{"text":"The results for each RAND simulation are averages over 1,000 trials on each input sample .","label":5,"label_text":"OWN"} +{"text":"The parse trees are implicit in the sense that each node in the tree is the ( mnemonic ) name of the grammar rule resolved on at that point , rather than the syntactic category of the LHS of the grammar rule as is the case in an ordinary parse tree .","label":6,"label_text":"OTH"} +{"text":"Consider the following example :","label":4,"label_text":"BKG"} +{"text":"A threshold value of say 1.00 in our example will yield the set of cutnodesand result in the set of specialized rules of Figure.","label":5,"label_text":"OWN"} +{"text":"We can show Cut Elimination for this calculus by a straight-forward 
adaptation of the Cut elimination proof for .","label":5,"label_text":"OWN"} +{"text":"In sectiontheir experimental evaluation is presented .","label":0,"label_text":"TXT"} +{"text":"Each collocate subentry bears the value of the lexical function in its semantics field .","label":5,"label_text":"OWN"} +{"text":"The purpose of the experiment was to estimate SPATTER 's ability to learn the syntax for this domain directly from a treebank , instead of depending on the interpretive expertise of a grammarian .","label":5,"label_text":"OWN"} +{"text":"This schema gives rise to various actual rules whose semantics depends on the number of arguments that the shared material takes .","label":6,"label_text":"OTH"} +{"text":"If Narration (A,B) then.","label":5,"label_text":"OWN"} +{"text":"The default rule that copies characters between surface and lexical levels and the boundary rule that deletes boundary markers are both optional .","label":5,"label_text":"OWN"} +{"text":"Distributional neighborhoodSchuetze 1993: burglars , thief , rob , mugging , stray , robbing , lookout , chase , crate .","label":6,"label_text":"OTH"} +{"text":"Class construction is then combinatorially very demanding and depends on frequency counts for joint events involving particular words , a potentially unreliable source of information as we noted above .","label":1,"label_text":"CTR"} +{"text":"The output is a new logical form representing the context as a whole , with all variables correctly bound .","label":5,"label_text":"OWN"} +{"text":"RULES","label":6,"label_text":"OTH"} +{"text":"The best evaluations found are given below :","label":5,"label_text":"OWN"} +{"text":"In other words , the genes A and B are cut at a position determined by r and the first part of A is spliced with the second part of B to create a new gene .","label":6,"label_text":"OTH"} +{"text":"These triples were manually analysed using as context the entire article in which they appeared .","label":5,"label_text":"OWN"} 
+{"text":"Slightly more generally , PLTG can generate the language","label":5,"label_text":"OWN"} +{"text":"Entropy is well-known from physics , but the concept of perplexity is perhaps better known in the speech-recognition and natural-language communities .","label":6,"label_text":"OTH"} +{"text":"The starting point for this work wasScha and Polanyi's discourse grammarScha and Polanyi 1988,Pruest et al. 1994.","label":3,"label_text":"BAS"} +{"text":"Formally , we say that the u level is stronger than the i level , which is stronger than the d level :.","label":5,"label_text":"OWN"} +{"text":"For each case , they were given the full set of nouns in the numbered category ( as shown above ) together with descriptions of the WordNet senses for the word to be disambiguated ( as , for example , the list of 25 senses for line given in the previous section , though thankfully few words have that many senses ! ) .","label":5,"label_text":"OWN"} +{"text":"Whentakes on 1 or 0 as its value , we call the model a ` slot-based model '.","label":5,"label_text":"OWN"} +{"text":"The results of their on - and off-line experiments show clearly that the low attachment ( corresponding to) is easiest , but the middle attachment ( corresponding to) is most difficult .","label":5,"label_text":"OWN"} +{"text":"I would like to thank Stuart Shieber and Barbara Grosz for valuable discussions and comments on earlier drafts .","label":5,"label_text":"OWN"} +{"text":"Most input nodes are connected to both outputs , since most tuples occur in both grammatical and ungrammatical strings .","label":5,"label_text":"OWN"} +{"text":"The two texts used for training the HMM were selected from the German data contained on theECI's Multilingual CD-ROMECI 1994: a 200,000 and 2,000,000 word sample from Summer 1992 issues of the German newspaper Frankfurter Rundschau .","label":5,"label_text":"OWN"} +{"text":"Now consider :","label":4,"label_text":"BKG"} +{"text":"That means that if an interpretationmakes an 
utterance true by assigning to a relation R a defeasible status , while another interpretationmakes the same utterance true by assigning the same relation R a stronger status ,will be the preferred or optimistic one , because it is as informative asand it allows more options in the future ( R can be defeated ) .","label":5,"label_text":"OWN"} +{"text":"However , on the assumption that incorrect prediction of gaps is the main avoidable source of performance degradation ( c.f.Moore and Dowding) , further investigation shows that the speed-up is near the maximum that is possible with the ANLT grammar ( around 50 % ) .","label":5,"label_text":"OWN"} +{"text":"GLR* is a recently developed robust version of the Generalized LR ParserTomita 1986, that can parse almost any input sentence by ignoring unrecognizable parts of the sentence .","label":6,"label_text":"OTH"} +{"text":"They identified strategies that a system can adopt in justifying its beliefs ; however , they did not specify the criteria under which each of these strategies should be selected .","label":1,"label_text":"CTR"} +{"text":"However , further research is needed to determine how realistic these estimates turn out to be .","label":5,"label_text":"OWN"} +{"text":"The-Prolog code fragment shown in Figuredeclares how the CCG logical forms are represented .","label":5,"label_text":"OWN"} +{"text":"Tense in VP-ellipsis illustrates how categories can be put to work","label":5,"label_text":"OWN"} +{"text":"Two of these 6 interjections were to supply extra information and one was marked with the cue `` as well '' .","label":5,"label_text":"OWN"} +{"text":"These membranes becomes their scopes for case ( or role ) domination ; namely , each verb searches for molecules ( noun phrases ) that are necessary to satisfy each verb 's case ( role ) frame , within its membrane .","label":5,"label_text":"OWN"} +{"text":"We then went on to analyse how control was exchanged between participants at the boundaries of these 
phases .","label":5,"label_text":"OWN"} +{"text":"A major component of any discourse algorithm is the prediction of which entities are salient , even though all the factors that contribute to the salience of a discourse entity have not been identifiedPrince 1981,Prince 1985,Brown and Fish 1983,Hudson et al. 1986.","label":4,"label_text":"BKG"} +{"text":"This paper puts forward an architecture that combines several established NL generation techniques adapted for a particular application , namely the presentation of ND style proofs .","label":2,"label_text":"AIM"} +{"text":"The tests were run using Good-Turing correction to the probability estimates ; that is , rather than estimating the probability of the transition from a tag i to a tag j as the count of transition from i to j in the training corpus divided by the total frequency of tag i , one was added to the count of all transitions , and the total tag frequencies adjusted correspondingly .","label":5,"label_text":"OWN"} +{"text":"We argue that the resource sharing that is commonly manifest in semantic accounts of coordination is instead appropriately handled in terms of structure-sharing in LFG f-structures .","label":1,"label_text":"CTR"} +{"text":"` pair ' pluralia tanta have a singular form when used as modifiers ( a scissor movement ) .","label":4,"label_text":"BKG"} +{"text":"Thus the pair `` most < adjective > '' is taken as a single superlative adjective .","label":5,"label_text":"OWN"} +{"text":"Empty or displaced heads pose us no problem , since the optimal evaluation order of the right-hand side of a rule is determined regardless of the head .","label":5,"label_text":"OWN"} +{"text":"In our analysis , the f-structure for two trade bills is resource-shared as the object of the two verbs , just as it is in the coordinated case .","label":5,"label_text":"OWN"} +{"text":"Although there have been many methods of word clustering proposed to date , their objectives seem to vary 
.","label":5,"label_text":"OWN"} +{"text":"Of course , not all acquired rules are equally good as plausible guesses about word-classes : some rules are more accurate in their guessings and some rules are more frequent in their application .","label":5,"label_text":"OWN"} +{"text":"It was found that triples alone gave as good results as pairs and triples together .","label":5,"label_text":"OWN"} +{"text":"Development , compilation and run-time efficiency are quite acceptable , and the use of rules containing complex feature-augmented categories allows morphotactic behaviours and non-segmental spelling constraints to be specified in a way that is perspicuous to linguists , leading to rapid development of descriptions adequate for full NLP .","label":5,"label_text":"OWN"} +{"text":"( I hand-selected it from that group for presentation here , however . )","label":6,"label_text":"OTH"} +{"text":"However ,Solomonoffdoes not specify a concrete search algorithm and only makes suggestions as to its nature .","label":1,"label_text":"CTR"} +{"text":"Dynamic indexing of partial and complete constituents on category types to avoid attempting unification or subsumption operations which static analysis shows will always fail .","label":6,"label_text":"OTH"} +{"text":"We illustrate the general difficulties encountered with quantitative evaluation .","label":5,"label_text":"OWN"} +{"text":"In addition to the common practice of mapping POS tags according to the words ' suffixes , this implementation makes use of the case of the initial letter of a word -- which is highly significant for POS assignment in German .","label":5,"label_text":"OWN"} +{"text":"At the log-likelihood maximum , the variationmust vanish .","label":5,"label_text":"OWN"} +{"text":"compromise completeness","label":5,"label_text":"OWN"} +{"text":"It was noted above that substitutions on term indices in scope nodes ensures scope parallelism .","label":4,"label_text":"BKG"} +{"text":"Secondly , it can highlight 
anomalous aspects of a given system .","label":5,"label_text":"OWN"} +{"text":"The choice between the two is forced by the presence or absence of a focussed item .","label":5,"label_text":"OWN"} +{"text":"Therefore unlike the `` garu \/ gat-ta '' case , the experiencer also can be an obligatory semantic role of higher clause as well as the speaker .","label":5,"label_text":"OWN"} +{"text":"On the other hand , the different characteristics of within-language and between-language associations account for the independent functional behavior .","label":6,"label_text":"OTH"} +{"text":"However , the conditions under which a representation of an utterance may serve as a suitable basis for interpreting subsequent elliptical forms remain poorly understood ; specifically , past attempts to characterize these processes within a single traditional module of language processing ( e.g. , considering either syntax , semantics , or discourse in isolation ) have failed to account for all of the data .","label":1,"label_text":"CTR"} +{"text":"As is often the case where sense ambiguity is involved , we as readers impose the most coherent interpretation on the words within the group without being aware that we are doing so .","label":4,"label_text":"BKG"} +{"text":"Checking whether the intersection is empty or not is then usually very simple as well : only in the latter case will the parser terminate successfully .","label":4,"label_text":"BKG"} +{"text":"The testing data was collected for an independent taskJain 1991.","label":5,"label_text":"OWN"} +{"text":"A final transformation consists in associating a given surface form with its ambiguity class , i.e. with the alphabetically ordered sequence of all its possible tags .","label":5,"label_text":"OWN"} +{"text":"For example , expressions like `` ... 3 reasons . First , ... Second , ... Third , ... '' , and `` ... Of course , ... But , ... 
'' are extracted and the structural constraint is added onto the sequence so as to form a chunk between the expressions .","label":5,"label_text":"OWN"} +{"text":"This notion has been central to a number of approaches to grammar for some time , including theories like dependency grammarHudson 1976,Hudson 1990and HPSGPollard and Sag 1987.","label":6,"label_text":"OTH"} +{"text":"Morphology also needs to be well integrated with other processing levels .","label":5,"label_text":"OWN"} +{"text":"This need motivates research on fully automatic text processing that may rely on general principles of linguistics and computation , but does not depend on knowledge about individual words .","label":4,"label_text":"BKG"} +{"text":"The sentence is false in the case where out of ten women , one owns 50 cats and is happy , while the other nine women own only one cat each , and are miserable .","label":6,"label_text":"OTH"} +{"text":"The- variable in g can be shifted beyond the scope of f so that we can concatenate f and g first , and , thus , have a become applicable as in Fig..","label":5,"label_text":"OWN"} +{"text":"There are two important questions which arise at the rule acquisition stage - how to choose the scoring thresholdand what is the performance of the rule-sets produced with different thresholds .","label":5,"label_text":"OWN"} +{"text":"( Pronouns , like proper names , are treated as contextually restricted quantifiers , where the contextual restriction may limit the domain of quantification to one individual . 
)","label":5,"label_text":"OWN"} +{"text":"This is simulated by a bottom-up operator called Simplify-Bottom-Up .","label":5,"label_text":"OWN"} +{"text":"Proposition","label":5,"label_text":"OWN"} +{"text":"As can be seen , the dependency model is more accurate than the adjacency model .","label":1,"label_text":"CTR"} +{"text":"vocalised texts incorporate full vocalisation , e.g.tada ' 043 ra for \/tada ' 043 ra\/ .","label":4,"label_text":"BKG"} +{"text":"Based on corpus statistics , they provide analogies between words that often agree with our linguistic and domain intuitions .","label":6,"label_text":"OTH"} +{"text":"Appropriateness encodes a relationship between the denotations of species and attributes : ifis defined then the denotation of attributeacts upon each object in the denotation of speciesto yield an object in the denotation of type, but ifis undefined then the denotation of attributeacts upon no object in the denotation of species.","label":5,"label_text":"OWN"} +{"text":"selects the most effective aspect to address in its pursuit of conflict resolution when multiple conflicts exist ,","label":5,"label_text":"OWN"} +{"text":"Heuristics similar to those described byHardt 1992may be used for this .","label":5,"label_text":"OWN"} +{"text":"This work has principally been developed on text of technical manuals from Perkins Engines Ltd. 
, which have been translated by a semi-automatic processPym 1993.","label":5,"label_text":"OWN"} +{"text":"However initial tests over a small file of constructed errors showed that the error rules did just as well ( slightly better in fact ) at choosing the ` correct correction ' .","label":5,"label_text":"OWN"} +{"text":"This paper will focus onleavingfor future work .","label":2,"label_text":"AIM"} +{"text":"This means that in the 15 out of 20 cases where the shift to global focus is identifiably marked with a cue-word such as now , the segment rules will allowBrennan et al.to get the global focus examples .","label":5,"label_text":"OWN"} +{"text":"To apply the centering theory that is originally for a sequence of sentences , namely discourse , we regard the subordinate clause and the main clause as a segment of discourse respectively .","label":6,"label_text":"OTH"} +{"text":"The results for four kanzi character words are almost equal .","label":5,"label_text":"OWN"} +{"text":"As evidence for these claims , I present results showing that clustering improves some models but not others for the ATIS domain .","label":5,"label_text":"OWN"} +{"text":"These mixed-initiative features make these sequences inherently different than text .","label":5,"label_text":"OWN"} +{"text":"For this reason rules are marked as to whether they can occur more than once .","label":5,"label_text":"OWN"} +{"text":"Examples of these relations are given in sentences-.","label":5,"label_text":"OWN"} +{"text":"Here the value of ` 1 ' indicates the presence of the case slot in question , and ` 0 ' absence .","label":5,"label_text":"OWN"} +{"text":"A large part of the art of probabilistic language modelling resides in the management of the trade-off between descriptive power ( which has the merit of allowing us to make the discriminations which we want ) and independence assumptions ( which have the merit of making training practical by allowing us to treat similar situations as equivalent ) 
.","label":4,"label_text":"BKG"} +{"text":"However ,Carterhas proposed some extensions toSidner's algorithm for local focusing that seem to be relevant hereCarter 1987.","label":6,"label_text":"OTH"} +{"text":"To achieve this the ends of the wordstring delimited by the purview need to be treated differently .","label":5,"label_text":"OWN"} +{"text":"However , these search errors conveniently occur on sentences which SPATTER is likely to get wrong anyway , so there isn't much performance lossed due to the search errors .","label":5,"label_text":"OWN"} +{"text":"Each unit has one head-part and several det-parts .","label":5,"label_text":"OWN"} +{"text":"Deixis serves to pick out objects that cannot be selected by the use of standard anaphora , i.e. we should expect the referents for deixis to be outside immediate focus and hence more likely to be outside the current segmentWebber 1986.","label":5,"label_text":"OWN"} +{"text":"Tablesketches definitions for some Common Topic relations , some taken from and others adapted fromHobbs 1990.","label":5,"label_text":"OWN"} +{"text":"assigning thesaurus categories provides :","label":5,"label_text":"OWN"} +{"text":"This state is no longer an atomic eventuality .","label":5,"label_text":"OWN"} +{"text":"Other choices , involving sister rules and \/ or rules in less closely related positions , or the compilation of rules into common combinationsSamuelsson and Rayner 1991might have worked as well or better ; our purpose here is simply to illustrate and assess ways in which explicit context modeling can be combined with clustering .","label":6,"label_text":"OTH"} +{"text":"The algorithm is constructed in such a way that lowering is only attempted in cases where simple attachment fails .","label":5,"label_text":"OWN"} +{"text":"But there is no point in processing these later hypotheses since we will be forced to select one interpretation essentially at random .","label":5,"label_text":"OWN"} +{"text":"The constraints that we 
identify for the tree-based system can be regarded equally well as constraints on unification-based grammar formalisms such as PATRShieber 1984.","label":5,"label_text":"OWN"} +{"text":"We refer to this formalism as Partially Linear Tree Grammars ( PLTG ) .","label":5,"label_text":"OWN"} +{"text":"By restricting the matrices,, andto their first m # LT k columns ( = principal components ) one obtains the matrices T , S , and D .","label":5,"label_text":"OWN"} +{"text":"A text generator based on the local organization , in contrast , repeatedly chooses a part of the remaining task and carries it out .","label":6,"label_text":"OTH"} +{"text":"Thus what we are evaluating is the extent to which these algorithms suffice to narrow the search of an inference component .","label":5,"label_text":"OWN"} +{"text":"The results for the three languages turn out to be quite different , and the general conclusion ( which is the overall contribution of the paper ) will be that the external criterion should be the one to dominate tagset design : there is a limit to how knowledge-free we can be .","label":5,"label_text":"OWN"} +{"text":"Actually , such gender-ambiguous words are not very frequent .","label":5,"label_text":"OWN"} +{"text":"Although many of the classes acquired result from the accumulation of incorrect senses ( 73.3 % ) , it seems that their size tends to be smaller than classes in other categories , as they only contain a 51.4 % of the senses .","label":5,"label_text":"OWN"} +{"text":"We present some variations affecting the association measure and thresholding on a technique for learning Selectional Restrictions from on-line corpora .","label":2,"label_text":"AIM"} +{"text":"For the prototype in which users can process their own text , the net was trained on the whole corpus , slightly augmented .","label":5,"label_text":"OWN"} +{"text":"Second , there is a class of techniques for learning rules from text , a recent example beingBrill 
1993.","label":4,"label_text":"BKG"} +{"text":"The scientific questions arise in connection to distributional views of linguistic ( particularly lexical ) structure and also in relation to the question of lexical acquisition both from psychological and computational learning perspectives .","label":4,"label_text":"BKG"} +{"text":"Some of the methods we could use for assessing experimentally the accomplishment of these criteria would be :","label":5,"label_text":"OWN"} +{"text":"Thus , eachbecomes a set of links :, whereis a link with thickness.","label":5,"label_text":"OWN"} +{"text":"This is clear for, but it also holds forif we want to interpret a sentence like a man stole a bike as","label":5,"label_text":"OWN"} +{"text":"( See Figure. ) .","label":5,"label_text":"OWN"} +{"text":"We empirically compared the performance of our method based on the MDL Principle against the Maximum Likelihood Estimator in word clustering , and found that the former outperforms the latter .","label":1,"label_text":"CTR"} +{"text":"This is where word-POS guessers take their place -- they employ the analysis of word features , e.g. 
word leading and trailing characters , to figure out its possible POS categories .","label":4,"label_text":"BKG"} +{"text":"This paper discusses the lexicographical concept of lexical functionsMel'cuk and Zolkovsky 1984and their potential exploitation in the development of a machine translation lexicon designed to handle collocations .","label":2,"label_text":"AIM"} +{"text":"The fragmentation feature was given a weight of 1.1 , to prefer skipping a word if it reduces the fragmentation count by at least one .","label":5,"label_text":"OWN"} +{"text":"The representation of belief in these models has been binary ;","label":6,"label_text":"OTH"} +{"text":"The Narration relation orders the times in forward progression in passageand the Explanation relation orders them in backward progression in passage.","label":5,"label_text":"OWN"} +{"text":"Suppose , for example , we thought that the VP ` ate a peach ' should be interpreted as :","label":5,"label_text":"OWN"} +{"text":"More recently , we have constructed similar tables with the help of a statistical part-of-speech taggerChurch 1988and of tools for regular expression pattern matching on tagged corporaYarowsky 1992.","label":3,"label_text":"BAS"} +{"text":"We employ the ` simulated annealing technique ' to deal with this problem .","label":3,"label_text":"BAS"} +{"text":"which is a derivative ofKendall'sthatDietz 1983empirically found particularly accurate as a test statistic for comparing distance matrices .","label":5,"label_text":"OWN"} +{"text":"( But note that since the time taken for parse forest unpacking is not included in parse times , the latter do not vary by such a large magnitude ) .","label":5,"label_text":"OWN"} +{"text":"It therefore seems that the deleted words must have the same major syntactic category , and the same lexical meaning .","label":4,"label_text":"BKG"} +{"text":"Everyone in P has reason to believe thatholds .","label":6,"label_text":"OTH"} +{"text":"The substitutions are represented 
using the notation `' .","label":5,"label_text":"OWN"} +{"text":"In this section , we describe how the parameters of our grammar , the probabilities associated with each grammar rule , are set .","label":0,"label_text":"TXT"} +{"text":"The correspondence between surface and lexical strings for an entire word is licensed if there is a partitioning of both so that each partition ( pair of corresponding surface and lexical targets ) is licensed by a rule , and no partition breaks an obligatory rule .","label":5,"label_text":"OWN"} +{"text":"If one assumes a pairwise linking from left to right then the links between the two trees can be omitted .","label":4,"label_text":"BKG"} +{"text":"Space limitations force us to abstract over the recursive optimization of the rules defining the right-hand side categories through considering only the defining lexical entries .","label":5,"label_text":"OWN"} +{"text":"However , once the preposition with has been attached , the required N ' node will no longer be accessible , and a conscious garden path effect will be predicted , which , intuitively , does not occur .","label":5,"label_text":"OWN"} +{"text":"Once equilibrium is reached , the current transcription is set to be the best transcription found so far , and the search continues .","label":6,"label_text":"OTH"} +{"text":"The structure of this paper is as follows : Sectionexplains the knowledge for structure analysis of compound nouns and the procedures to acquire it from a corpus , Sectiondescribes the analysis algorithm , and Sectiondescribes the experiments that are conducted to evaluate the performance of our method , and Sectionsummarizes the paper and discusses future research directions .","label":0,"label_text":"TXT"} +{"text":"A proper branch is a set of three nodes -- a mother and two daughters -- which are constructed by the parser , using a simple mechanism such as a shift-reduce interpreter , and then ` licensed ' by the principles of grammar 
.","label":6,"label_text":"OTH"} +{"text":"P Person : identified as 1 st to 6th in the tagset .","label":5,"label_text":"OWN"} +{"text":"the input state for evaluation of John will buy it right away is the output state from the antecedent a car impresses him .","label":6,"label_text":"OTH"} +{"text":"We handled this problem by partial executionPereira and Shieber 1987of the filler-head rule .","label":5,"label_text":"OWN"} +{"text":"We used the heuristics to reduce ambiguities in segmentation , but ambiguities may remain .","label":5,"label_text":"OWN"} +{"text":"The implementation consists of five modules :","label":5,"label_text":"OWN"} +{"text":"A split of evaluation efforts into quantitative versus qualitative is incoherent .","label":5,"label_text":"OWN"} +{"text":"is an effective algorithm , and for each feature structure F ,a list of the resolvants of F .","label":5,"label_text":"OWN"} +{"text":"Directional adjacent combination :","label":5,"label_text":"OWN"} +{"text":"Most language processing labeled as statistical involves associating real-number valued parameters to configurations of symbols .","label":4,"label_text":"BKG"} +{"text":"Clearly , if the expected pattern is initial maximum , we should not use BW at all , if early maximum , we should halt the process after a few iterations , and if classical , we should halt the process in a `` standard '' way , such as comparing the perplexity of successive models .","label":5,"label_text":"OWN"} +{"text":"Sectiondescribes three unification-based parsers which are related to polynomial-complexity bottom-up CF parsing algorithms .","label":0,"label_text":"TXT"} +{"text":"Vocalisation","label":4,"label_text":"BKG"} +{"text":"We distinguish 4 categories on the basis of which kind of `` location '' they intrinsically refer to .","label":5,"label_text":"OWN"} +{"text":"The goals that are relevant for bottom-up Earley deduction are called waiting goals because they wait until they are activated by a unit clause 
that unifies with the goal .","label":5,"label_text":"OWN"} +{"text":"A potential final design plan negotiated via a dialogue is shown in figure.","label":6,"label_text":"OTH"} +{"text":"Our heuristic uses a set of features by which each of the parse candidates can be evaluated and compared .","label":5,"label_text":"OWN"} +{"text":"This is now illustrated with a more interesting example ( adapted from Hirshbhler as cited by Dalrymple et al.) .","label":4,"label_text":"BKG"} +{"text":"Note that ` arg 2 ' and ` from ' should also be considered dependent via ` to ' but to a somewhat weaker degree .","label":6,"label_text":"OTH"} +{"text":"does not require the use of higher-order unification for dealing with quantifiers .","label":5,"label_text":"OWN"} +{"text":"For training from a hand-tagged corpus , the model is estimated by counting the number of transitions from each tag i to each tag j , the total occurrence of each tag i , and the total occurrence of word w with tag i .","label":5,"label_text":"OWN"} +{"text":"sanctions the spreading of the first vowel .","label":5,"label_text":"OWN"} +{"text":"In the results that follow , we will identify tagsets that include a given distinction with an uppercase letter and ones that do not with a lowercase letter ; for example G for a tagset that marks gender , and g for one that does not .","label":5,"label_text":"OWN"} +{"text":"Uncountable and pluralia tantum nouns in denumerated environments are translated as the prepositional complement of a classifier .","label":5,"label_text":"OWN"} +{"text":"It does so by gathering ( in cand-set ) evidence proposed by the user as direct support for _bel but which was not accepted by the system and which the system predicts it can successfully refute ( i.e.
, _bel.focus is not nil ) .","label":5,"label_text":"OWN"} +{"text":"This indicates that the focus of modification could be either Teaches(Smith,AI) or On-Sabbatical(Smith,next year) ( since the evidential relationship between them was accepted ) .","label":5,"label_text":"OWN"} +{"text":"In sectionwe analyze some data about the performance of an experiment run in a Unix machine , on a corpus of 800,000 words .","label":0,"label_text":"TXT"} +{"text":"Whereas , as for the editorials , the average length ratio ( abstract \/ original ) was 30 % , and the coverage of the key sentence and the most important key sentence were 41 % and 60 % respectively .","label":5,"label_text":"OWN"} +{"text":"We found that in each dialogue , the client initiates all the topics before the central shift , whereas the expert initiates the later ones .","label":5,"label_text":"OWN"} +{"text":"The algorithm in sectioncan analyze any input string with the least number of errors .","label":6,"label_text":"OTH"} +{"text":"The top-down presentation operators are roughly divided into two categories :","label":5,"label_text":"OWN"} +{"text":"Allowing categories on the stack to themselves have non-empty stacks moves the formalism one step further from being an indexed grammar .","label":5,"label_text":"OWN"} +{"text":"Recently , Murata and Nagao 1993 have proposed a method of determining the referentiality property and number of nouns in Japanese sentences for machine translation into English , but the research has not yet been extended to include the actual English generation .","label":1,"label_text":"CTR"} +{"text":"Using Bayes 's formula , we have","label":5,"label_text":"OWN"} +{"text":"D3 All lexical probabilities have the same value , so that the lexicon contains no information other than the possible tags for each word .","label":5,"label_text":"OWN"} +{"text":"LHIP provides a number of ways of applying a grammar to input .","label":5,"label_text":"OWN"} +{"text":"This says that the
relationship simple holds between some past instant A and the property of being a certain sort of event .","label":5,"label_text":"OWN"} +{"text":"These improvements haven't been implemented yet and will be the object of further works .","label":5,"label_text":"OWN"} +{"text":"Assertions - declarative utterances which were used to state facts .","label":5,"label_text":"OWN"} +{"text":"We call this set the subtree projection of that lexical category .","label":5,"label_text":"OWN"} +{"text":"Elman 1990trains a connectionist net to predict words , a process that generates internal representations that reflect grammatical category .","label":6,"label_text":"OTH"} +{"text":"Therefore , they lack linguistic clues and the system cannot extract the rhetorical structure exactly .","label":5,"label_text":"OWN"} +{"text":"For example , we can restrict the attention to DCGs of which the context-free skeleton does not contain cycles .","label":6,"label_text":"OTH"} +{"text":"kleene star is used only in a very limited context ( for the analysis of coordination ) ,","label":5,"label_text":"OWN"} +{"text":"The advantages of the latter are clear .","label":6,"label_text":"OTH"} +{"text":"No clash of temporal relations is predicted by our account , because the use of the simple pasts do not in themselves imply a specific ordering between them .","label":5,"label_text":"OWN"} +{"text":"While passagecould be understood as an Explanation on semantic grounds , the hearer assumes Narration since no other relation is cued .","label":5,"label_text":"OWN"} +{"text":"This is no doubt to be explained by the fairly small number of mapped concepts on which the distance metrics are based ( 51 ) .","label":5,"label_text":"OWN"} +{"text":"As it is not always possible to explicitly determine this when translating from Japanese to English , we divide these nouns into two groups : ` strongly countable ' , those that are more often used to refer to discrete entities , such as cake , and ` weakly 
countable ' , those that are more often used to refer to unbounded referents , such as beer .","label":4,"label_text":"BKG"} +{"text":"There is no linguistic justification .","label":1,"label_text":"CTR"} +{"text":"Clark and Wilkes-Gibbs developed the following process model to explain their findings .","label":6,"label_text":"OTH"} +{"text":"manmosu `` mammoth '' is fully countable so the generic noun phrase is translated as a bare plural .","label":5,"label_text":"OWN"} +{"text":"In compilation , one may compose any or all of","label":4,"label_text":"BKG"} +{"text":"We will assign less error values () to the insertion-error hypothesis edges of nonterminals which are embraced by comma or parenthesis .","label":5,"label_text":"OWN"} +{"text":"Furthermore , letand.","label":5,"label_text":"OWN"} +{"text":"Results are computed with and without tuning factors suggested in the literature .","label":5,"label_text":"OWN"} +{"text":"Our experimental result shows that employing the combined method does increase the coverage of disambiguation .","label":5,"label_text":"OWN"} +{"text":"In processing a sentence using a lexicalised formalism we do not have to look at the grammar as a whole , but only at the grammatical information indexed by each of the words .","label":5,"label_text":"OWN"} +{"text":"In the second experiment , each training sentence and each test sentence hypothesis was analysed by the Core Language Engine Alshawi 1992 trained on the ATIS domain Agns et al.
1994.","label":3,"label_text":"BAS"} +{"text":"The two taggers have the same tokeniser and morphological analyser .","label":6,"label_text":"OTH"} +{"text":"This suggests that the predictive power of neighbors beyond the closest 30 or so can be modeled fairly well by the overall frequency of the conditioned word .","label":5,"label_text":"OWN"} +{"text":"As we saw earlier : even for CFG it holds that there can be an infinite number of analyses for a given FSA ( but in the CFG this of course does not imply undecidability ) .","label":1,"label_text":"CTR"} +{"text":"There is a clear route to a more finely grained account if we allow the expansion probabilities to be conditioned on surrounding context .","label":5,"label_text":"OWN"} +{"text":"For the first processing stage we need to place the subject markers , and , as a further task , disambiguate tags .","label":5,"label_text":"OWN"} +{"text":"The addition of these modifications changes the quantitative results .","label":5,"label_text":"OWN"} +{"text":"Semantic understanding is necessary to distinguish between the states described by phrases of the form `` to be adjective '' and the processes described by phrases of the form `` to be past participle '' .","label":5,"label_text":"OWN"} +{"text":"The remaining 4 % require knowledge from outside the sentence being translated .","label":5,"label_text":"OWN"} +{"text":"As for lhip_success , but lists only the most specific successful rules ( i.e. 
those which have themselves succeeded but whose results have not been used elsewhere ) .","label":5,"label_text":"OWN"} +{"text":"and","label":5,"label_text":"OWN"} +{"text":"The isogloss approach groups Manx as a cousin of the Scottish dialects , and the phonetic approach makes it a cousin of the Irish dialects , but in both cases the s of Manx is very small ( less than 0.06 ) , making it essentially intermediate between the two groups .","label":5,"label_text":"OWN"} +{"text":"For example , ATNs were adapted by Woods 1973 , DCGs by Dahl and McCord 1983 , and chart parsers by Haugeneder 1992 .","label":6,"label_text":"OTH"} +{"text":"In the latter work , McCandless uses a heuristic search procedure similar to ours , but a very different search criteria .","label":6,"label_text":"OTH"} +{"text":"Conversely , more detail in the tagset may help the tagger when the properties of two adjacent words give support to the choice of tag for both of them ; that is , the transitions between tags contribute the information the tagger needs .","label":4,"label_text":"BKG"} +{"text":"In LFG , the syntax \/ semantics interface is more loosely coupled , affording the flexibility to handle coordinated and non-coordinated cases of RNR uniformly in the semantics .","label":5,"label_text":"OWN"} +{"text":"In short , the second strategy is a ` divide and conquer ' strategy : treat structural issues using model theoretic tools , and procedural issues with ( revealing ) computational tools .","label":5,"label_text":"OWN"} +{"text":"Furthermore , a similar relation holds betweenfor two empirical distributions p and p ' and the probability that p and p ' are drawn from the same distribution q .","label":5,"label_text":"OWN"} +{"text":"We can regard the whole sentence structure as more grammatical if the sum of these unification costs is smaller .","label":5,"label_text":"OWN"} +{"text":"There are both theoretical and practical motivations .","label":4,"label_text":"BKG"} +{"text":"In other words ,
A may decide that it is easier to just say the warrant rather than require B to infer or retrieve it .","label":5,"label_text":"OWN"} +{"text":"The notion of felicitously defeasible information is meant to capture the inferences that can be cancelled without any abnormality , as in :","label":4,"label_text":"BKG"} +{"text":"Let us introduce some terminology .","label":5,"label_text":"OWN"} +{"text":"Due to the `` context-freeness '' of PLPATR , new entries can be added to the compatibility array in a bottom-up manner based on existing entries without the need to reconstruct complete feature structures .","label":5,"label_text":"OWN"} +{"text":"One is forced to generate all possible natural language expressions licensed by the grammar and subsequently check them against the start category .","label":6,"label_text":"OTH"} +{"text":"It then outlines the neural net selection process .","label":5,"label_text":"OWN"} +{"text":"SYSCONJ does not immediately merge the two stack configurations after completing the second conjunct , but , instead , separately parses both conjuncts in parallel until a constituent is completed .","label":6,"label_text":"OTH"} +{"text":"Sensitivity to native-language phonotactics in 9-month-olds was recently reported by Jusczyk et al. 1993a .","label":6,"label_text":"OTH"} +{"text":"Proof .","label":5,"label_text":"OWN"} +{"text":"In the case of Ireland , everyone agrees that Gaelic is nowadays found in three main dialects : that of Ulster , that of Connacht , and that of Munster Siadhail 1989 .","label":4,"label_text":"BKG"} +{"text":"Lexicalist approaches to MT , particularly those incorporating the technique of Shake-and-Bake generation Beaven 1992a , Beaven 1992b , Whitelock 1994 , combine the linguistic advantages of transfer Arnold et al. 1988 , Allegranza et al. 1991 and interlingual Nirenburg et al.
1992 , Dorr 1993 approaches .","label":6,"label_text":"OTH"} +{"text":"A comparison of the two algorithms on each data set individually and an overall analysis on the three data sets combined revealed no significant differences in the performance of the two algorithms (, not significant ) .","label":5,"label_text":"OWN"} +{"text":"Lauer and Dras 1994 suggest two improvements to the method used above .","label":6,"label_text":"OTH"} +{"text":"The incorporation of unification into the CE parser follows the methodology developed for unification-based LR parsing described in the previous section : a table is computed from a CF ` backbone ' , and a parser , augmented with on-line unification and feature-based subsumption operations , is driven by the table .","label":5,"label_text":"OWN"} +{"text":"For example , the data in Figurecan be generated by a word-based model , and the data in Figureby a class-based model .","label":5,"label_text":"OWN"} +{"text":"Full-scale bag generation is not necessary because sufficient information can be transferred from the source language to severely constrain the subsequent search during generation .","label":6,"label_text":"OTH"} +{"text":"Nevertheless , this relationship should be further explored in future work .","label":5,"label_text":"OWN"} +{"text":"Similarity between words is computed by spreading activation on Paradigme .","label":5,"label_text":"OWN"} +{"text":"If some anaphora are left unresolved , apply the anaphora module again .","label":5,"label_text":"OWN"} +{"text":"If significant human intervention is needed to provide the biasing , then the advantages of automatic training become rather weaker , especially if such intervention is needed on each new text domain .","label":1,"label_text":"CTR"} +{"text":"Firstly , for each word type in the corpus we can collect the transitions with which it occurs and calculate its probability distribution over all possible transitions ( an infinite number of which will be zero )
.","label":5,"label_text":"OWN"} +{"text":"However , the structure for TNT deliver after 5 pm in Edinburgh requires one S node and three VP nodes ( or three S nodes and one VP node ) .","label":1,"label_text":"CTR"} +{"text":"The gene pool is renewed each generation , and the number of generations is another search parameter .","label":6,"label_text":"OTH"} +{"text":"For example , if people tend to write sentences with inserted phrases , then the parametermust increase .","label":5,"label_text":"OWN"} +{"text":"In considering ways of extending LIG , this paper has introduced the notion of partial linearity and shown how it can be manifested in the form of a constrained unification-based grammar formalism .","label":2,"label_text":"AIM"} +{"text":"If they are realized with question intonation , the inference of acceptance is blocked .","label":5,"label_text":"OWN"} +{"text":"We have also described two forms of discourse inference , namely Common Topic inference and Coherent Situation inference .","label":5,"label_text":"OWN"} +{"text":"However , when a right subtree is a word such as suffixes , this assumption does not always hold true .","label":5,"label_text":"OWN"} +{"text":"if the latter is more acceptable , build [] first ;","label":6,"label_text":"OTH"} +{"text":"Since it will combine with brown dog , no adjunction to a lower TNCB is attempted .","label":5,"label_text":"OWN"} +{"text":"For every possible category that can be conjoined , a separate lexical entry for and is required , and","label":1,"label_text":"CTR"} +{"text":"Despite all the complexities that individualize the recognition stage for each of these inferences , all of them can be defeated by context , by knowledge , beliefs , or plans of the agents that constitute part of the context , or by other pragmatic rules .","label":4,"label_text":"BKG"} +{"text":"This may have to do with how the categories are chosen and defined .","label":5,"label_text":"OWN"} +{"text":"In order to obtain a generator 
similar to the bottom-up generator as described in Shieber 1988 the compilation process can be modified such that only lexical entries are extended with magic literals .","label":6,"label_text":"OTH"} +{"text":"In fact , the corpus shows that we are processing included segments ( via quotes and parenthesis ) and there are no grammatical words and few clues to rely on .","label":5,"label_text":"OWN"} +{"text":"An action is obligatory if it is not permissible not to do it .","label":5,"label_text":"OWN"} +{"text":"Selection of the most appropriate subset of the candidate space to convey the SRs , taking into account that the final classes must be mutually disjoint .","label":5,"label_text":"OWN"} +{"text":"In this section , we elaborate on the merits of our method .","label":0,"label_text":"TXT"} +{"text":"identifying genres , and","label":4,"label_text":"BKG"} +{"text":"Hence , the partial syntax tree given in Fig.,","label":6,"label_text":"OTH"} +{"text":"One approach is to encode the salient properties in a static hierarchy as Davis 1989 , and Reiter and Dale 1992 have done .","label":6,"label_text":"OTH"} +{"text":"The backward index of edge 2 is therefore identified with the forward index of edge 1 .","label":5,"label_text":"OWN"} +{"text":"However , since the number of tags with better and worse performance is about the same ( 7 and 5 ) , one cannot conclude with certainty that generalized context vectors induce tags of higher quality .","label":5,"label_text":"OWN"} +{"text":"Proof .","label":5,"label_text":"OWN"} +{"text":"This research was partly funded by the Defence Research Agency , Malvern , UK , under Strategic Research Project M2YBT44X .","label":5,"label_text":"OWN"} +{"text":"This is the wrong result ; in a sentence such as ` Hillary wanted , found , and supported two candidates ' , the desired result is where one quantifier scopes over both extensional verbs ( that is , Hillary found and supported the same two candidates ) , just as in the case where all
the verbs are extensional .","label":1,"label_text":"CTR"} +{"text":"Researchers have studied the analysis and generation of arguments Birnbaum et al. 1980 , Reichman 1981 , Cohen 1987 , Sycara 1989 , Quilici 1992 , Maybury 1993 ; however , agents engaging in argumentative dialogues are solely interested in winning an argument and thus exhibit different behavior from collaborative agents .","label":1,"label_text":"CTR"} +{"text":"The span of a grammar rule R is the length of the longest islandsuch that terminalsandare both consumed ( directly or indirectly ) by R .","label":5,"label_text":"OWN"} +{"text":"This reference time is represented asin the top sub-DRS .","label":6,"label_text":"OTH"} +{"text":"The total length of all the words in the lexicon is the sum of this formula over all lexical items :","label":5,"label_text":"OWN"} +{"text":"The most general of these is that even quite crude corpus statistics can provide information about the syntax of compound nouns .","label":5,"label_text":"OWN"} +{"text":"considering the coverage of the WordNet taxonomy regarding the noun senses appearing in Treebank , and","label":0,"label_text":"TXT"} +{"text":"For example , it would be possible , by providing a suitable definition for set_dynamic_threshold \/ 2 , to set T to 0.5 when more than one optional adjective has been found , and 0.9 otherwise .","label":5,"label_text":"OWN"} +{"text":"Our approach uses a similar concept , but differs in that embedded syntactic constituents are detected one at a time in separate steps .","label":5,"label_text":"OWN"} +{"text":"Distinctions of this sort are usually found only in corpora such as Susanne which are parsed as well as tagged .","label":4,"label_text":"BKG"} +{"text":"From these static values , the dynamic inherited : slash-values ( feature abbreviated as \/\/ ) can be calculated during generation , see rule ( lex ) in Fig..","label":5,"label_text":"OWN"} +{"text":"Solutions with LSE are not necessarily the same as minimising the number
of misclassifications , and for certain types of data this second method of direct training may be appropriate .","label":5,"label_text":"OWN"} +{"text":"This robust parser can easily be scaled up and applied to various domains because this parser depends only on syntactic factors .","label":5,"label_text":"OWN"} +{"text":"Collaborative negotiation occurs when conflicts arise among agents developing a shared plan during collaborative planning .","label":4,"label_text":"BKG"} +{"text":"The intuitive interpretation that, is expected by our constraints :of Tableandof Table.","label":5,"label_text":"OWN"} +{"text":"Task structure in the pump dialogues is an important factor especially as it relates to the use of global focus .","label":5,"label_text":"OWN"} +{"text":"We regard Paradigme as a field for the interaction between text and episodes in memory -- the interaction between what one is hearing or reading and what one knows Schank 1990 .","label":5,"label_text":"OWN"} +{"text":"Two sources of information that might aid speech segmentation are : distribution -- the phoneme sequence in cat appears frequently in several contexts including thecat , cats and catnap , whereas the sequence in catn is rare and appears in restricted contexts ; and phonotactics -- cat is an acceptable syllable in English , whereas pcat is not .","label":4,"label_text":"BKG"} +{"text":"How can the collaborative planning principles highlight the differences we observe ?","label":5,"label_text":"OWN"} +{"text":"The equations presented above for the dependency model differ from those developed in Lauer and Dras 1994 in one way .","label":1,"label_text":"CTR"} +{"text":"The use we make of lexical functions as interlingual representations , does not respect their original Mel'cuk-ian interpretation .","label":1,"label_text":"CTR"} +{"text":"Methods for the automatic compilation of rules from a notation convenient for the rule-writer into finite-state automata have also been developed , allowing the
efficient analysis and synthesis of word forms .","label":6,"label_text":"OTH"} +{"text":"The invocation of an ordering operator is always followed by the invocation of a splitting operator , which actually posts subgoals by calling the function Present with the ordered goals subsequently .","label":5,"label_text":"OWN"} +{"text":"Consider now utterance.","label":5,"label_text":"OWN"} +{"text":"Note : we do not care about frequencies of word collocations prior to replacing words with thesaurus categories .","label":5,"label_text":"OWN"} +{"text":"As an example , assume that.","label":5,"label_text":"OWN"} +{"text":"The tableau yields two model schemata ( see figure) ; in both of them , it is defeasibly inferred that Mary came to the party .","label":5,"label_text":"OWN"} +{"text":"Yet , typically agents will still respond in such situations .","label":4,"label_text":"BKG"} +{"text":"Languages with a rich morphology may be more difficult than English since with fewer tokens per type , there is less data on which to base a categorization decision .","label":5,"label_text":"OWN"} +{"text":"In our simulated annealing algorithm , we could alternatively employ the Maximum Likelihood Estimator ( MLE ) as criterion for the best probabilistic model , instead of MDL .","label":6,"label_text":"OTH"} +{"text":"Unless a keyboard user is particularly proficient , a frustrating amount of time is usually spent backtracking to pick up mis-typed or otherwise mistaken input .","label":4,"label_text":"BKG"} +{"text":"is a total function from Q to, and","label":5,"label_text":"OWN"} +{"text":"LHIP grammars are an extended form of Prolog DCG grammars .","label":5,"label_text":"OWN"} +{"text":"The higher the coverage , the better the technique succeeds in correctly generalizing more of the input examples .","label":5,"label_text":"OWN"} +{"text":"It is very interesting that parameters of heuristics reflect the characteristics of the test corpus .","label":5,"label_text":"OWN"} 
+{"text":"These results will be useful in the next section .","label":5,"label_text":"OWN"} +{"text":"use word collocations with all words is assigned a single category .","label":5,"label_text":"OWN"} +{"text":"For Judge 1 , there were 99 test instances with sufficiently high confidence to be considered .","label":5,"label_text":"OWN"} +{"text":": source language relation graph","label":5,"label_text":"OWN"} +{"text":"Only the magic part of the abstract unfolding tree is represented .","label":5,"label_text":"OWN"} +{"text":"The bigram similarity model was also tested as a language model in speech recognition .","label":5,"label_text":"OWN"} +{"text":"Also , whereasPark 1992requires careful consideration of handling of determiners with coordination , here such sentences are handled just like any others .","label":5,"label_text":"OWN"} +{"text":"He performed a best-first search on the candidate space .","label":6,"label_text":"OTH"} +{"text":"The system considers this ( invoking the domain plan reasoner to search the plan for problems or incomplete parts ) and decides that the plan will work , and so decides to perform the requested action - an evaluation speech act .","label":5,"label_text":"OWN"} +{"text":"An example of problematic complement displacement taken from our test-grammar is given in figure( see next page ) .","label":5,"label_text":"OWN"} +{"text":"Consider the following simple definition of an HPSG , with the recursive definition of the predicate sign\/1 .","label":6,"label_text":"OTH"} +{"text":"root - the node is the root of the tree .","label":5,"label_text":"OWN"} +{"text":"An extragrammatical sentence is what a normal parser fails to analyze .","label":4,"label_text":"BKG"} +{"text":"This might be expected since the corresponding finite state automaton is not determinised -- to avoid theoretical exponential time complexity on grammar size -- thus paying a price at run time .","label":5,"label_text":"OWN"} +{"text":"Furthermore we discuss 
approaches to cope with the problem .","label":5,"label_text":"OWN"} +{"text":"The two terms of Assoc try to capture different properties :","label":6,"label_text":"OTH"} +{"text":"Local focus is the intermediate conclusion last presented , while the semantic objects involved in the local focus are called the focal centers .","label":5,"label_text":"OWN"} +{"text":"The choice between the use of raced as the main verb , or as part of the reduced relative is usually assumed to be within the fragment the horse raced , suggesting that there are two distinguished parsing states after raced .","label":5,"label_text":"OWN"} +{"text":"= 0.502510 ( coherent ) ,","label":5,"label_text":"OWN"} +{"text":"whereis thickness of the j-th subrfrant of.","label":5,"label_text":"OWN"} +{"text":"There has already been some early work done on providing statistically based parsing using transitions between recursively structured syntactic categories Tugwell 1995 .","label":5,"label_text":"OWN"} +{"text":"In these experiments the scheme was applied to the grammar of a version of the SRI Core Language Engine Alshawi 1992 adapted to the ATIS domain for a speech-translation task Rayner et al.
1993 and large corpora of real user data collected using Wizard-of-Oz simulation .","label":6,"label_text":"OTH"} +{"text":"N Number : singular , plural .","label":5,"label_text":"OWN"} +{"text":"The word stray probably should be excluded also , since it most likely appears on this list as an adjective ( as in `` stray bullet '' ) .","label":6,"label_text":"OTH"} +{"text":"Knowing now that the addition of a stack-valued feature suffices to capture the basic hierarchical structure of language , additional features can be used to deal with other syntactic relations .","label":5,"label_text":"OWN"} +{"text":"The objects involved in the overall model are as follows ( we omit target speech synthesis under the assumption that it proceeds deterministically from a target language word string ) :","label":5,"label_text":"OWN"} +{"text":"Sussna gives as an example of the problem he is solving the following paragraph from the corpus of 1963 Time magazine articles used in information retrieval research ( uppercase in the Time corpus , lowercase here for readability ; punctuation is as it appears in the original corpus ) :","label":6,"label_text":"OTH"} +{"text":"For Kamp , strict identity involves copying the discourse referent of the antecedent and identifying it with that of the elided pronoun .","label":6,"label_text":"OTH"} +{"text":"Brennan et al. can add the VP and the S onto the end of the forward centers list , as Sidner does in her algorithm for local focusing Sidner 1979 .","label":6,"label_text":"OTH"} +{"text":"So , ifis defined then the denotation of attributeacts upon each object in the denotation of typeto yield an object in the denotation of type.","label":5,"label_text":"OWN"} +{"text":"Regrettably , the LL data was only available from isolated disyllables , and other sequences such as LH and HH were not available at all .","label":5,"label_text":"OWN"} +{"text":"can be associated with a semantic representation ,x.
likes ( john , x ) .","label":6,"label_text":"OTH"} +{"text":"We adopt a suggestion by Chierchia in Partee 1984 , that the whole implication be rendered as a state .","label":3,"label_text":"BAS"} +{"text":"Notice , however , that the choice of back-off or interpolation is independent from the similarity model used .","label":6,"label_text":"OTH"} +{"text":"However , the VP is not in a suitable form , as the object has been abstracted out of it ( yielding a trace assumption ) .","label":5,"label_text":"OWN"} +{"text":"In the cue words approach , Reichman 1985 has claimed that phrases like `` because '' , `` so '' , and `` but '' offer explicit information to listeners about how the speaker 's current contribution to the discourse relates to what has gone previously .","label":6,"label_text":"OTH"} +{"text":"Translation models that exclusively code contrastive ( cross-linguistic ) information .","label":5,"label_text":"OWN"} +{"text":"In this sense , the constraints in the main clause can be treated as almost local constraints of the main clause .","label":5,"label_text":"OWN"} +{"text":"Passageis likewise coherent by virtue of the inferences resulting from identifying parallel elements and properties , including that John is a young aspiring politician and that he 's a Democrat ( since Clinton is identified with his party 's candidate ) .","label":5,"label_text":"OWN"} +{"text":"Is such additional complexity really needed ?","label":5,"label_text":"OWN"} +{"text":"Begin at the NP node immediately dominating the pronoun in the parse tree of S .","label":6,"label_text":"OTH"} +{"text":"Secondly , the error analysis suggests that considering non-local dependencies would improve results .","label":5,"label_text":"OWN"} +{"text":"We also examined how utterance type related to topic shift and found that few interruptions introduced a new topic .","label":5,"label_text":"OWN"} +{"text":"I compared agglomeration to a top-down method that Kaufman and Rousseeuw 1990 call
partitioning around medoids .","label":1,"label_text":"CTR"} +{"text":"In addition , it cures a slight overgeneration problem in Dalrymple et al.'s account .","label":1,"label_text":"CTR"} +{"text":"Our surface generator TAG-GEN Kilger 1994 produces the utterance :","label":6,"label_text":"OTH"} +{"text":"In this paper , we only consider events and states , together termed eventualities in Bach 1981 .","label":2,"label_text":"AIM"} +{"text":"An arrow pointing fromtois called a subordination constraint and means that the formulamust not have wider scope than.","label":6,"label_text":"OTH"} +{"text":"For example , sometimes even fully countable nouns can be used in uncountable noun phrases .","label":5,"label_text":"OWN"} +{"text":"Accounts of various linguistic phenomena have been developed within the framework on which our extension is based , including quantifiers and anaphora Dalrymple et al. 1994a , intensional verbs Dalrymple et al. 1994b , and complex predicates Dalrymple et al. 1993a .","label":3,"label_text":"BAS"} +{"text":"A typical bilingual postulate for translating betweenandmight be of the form :","label":5,"label_text":"OWN"} +{"text":"takusan-no does not carry this nuance so ALT-J \/ E will translate a noun phrase modified by it as mass - uncountable , and takusan-no as many if the head is countable and much otherwise .","label":5,"label_text":"OWN"} +{"text":"Although the usefulness of on-line semantic filtering during the processing of complete sentences is debatable , filtering has a more plausible role to play in interactive , real-time environments , such as interactive spell checkers ( see e.g. Wirn 1990 for arguments for incremental parsing in such environments ) .","label":4,"label_text":"BKG"} +{"text":"Evaluating the default body , the system creates a V-object OBJ ' .","label":5,"label_text":"OWN"} +{"text":"Noun phrases modified by Japanese \/ English pairs that are translated as denumerators we call denumerated .","label":5,"label_text":"OWN"}
+{"text":"We conclude by sketching a technique which does treat such structures .","label":0,"label_text":"TXT"} +{"text":"Indeed ,Groenendijk and Stokhofclaim that the compositional nature of Dynamic Predicate Logic enables one to `` interpret a text in an on-line manner , i.e. , incrementally , processing and interpreting each basic unit as it comes along , in the context created by the interpretation of the text so far '' .","label":6,"label_text":"OTH"} +{"text":"For this , first of all , we introduce a new pragmatic role called observer .","label":5,"label_text":"OWN"} +{"text":"In fact , as can be seen from Figure, before copying takes place there is no sentence-level semantics for gapped clauses at all .","label":5,"label_text":"OWN"} +{"text":"Vocalisms","label":4,"label_text":"BKG"} +{"text":"We have developed an automatic abstract generation system for Japanese expository writings based on rhetorical structure extraction .","label":2,"label_text":"AIM"} +{"text":"Any link that is still disabled is activated and initialised to 0 , so that tuples which have not occurred in the training corpus make no contribution to the classification task .","label":5,"label_text":"OWN"} +{"text":"For example , the adverb in `` Mc * N. Hester , CURRENTLY Dean of ... '' and the conjunction in `` to add that , IF United States policies ... 
'' have similar immediate neighbors ( comma , NP ) .","label":5,"label_text":"OWN"} +{"text":"Using the hierarchy in figurethe analyses of the five sentences from figureare as in figure.","label":5,"label_text":"OWN"} +{"text":"A method to draw the co-occurrence triples from corpus is proposed in subsection.","label":0,"label_text":"TXT"} +{"text":"Replacements correct pieces of information e.g.","label":4,"label_text":"BKG"} +{"text":"The method combines a constraint-based approach with an approach based on preferences : we exploit the HPSG type hierarchy and unification to arrive at a temporal structure using constraints placed on that structure by tense , aspect , rhetorical structure and temporal expressions , and we use the temporal centering preferences described byKameyama et al. 1993,Poesio 1994to rate the possibilities for temporal structure and choose the best among them .","label":3,"label_text":"BAS"} +{"text":"With a good lexicon but either degraded transitions or a test corpus differing from the training corpus , the pattern tends to be early maximum .","label":5,"label_text":"OWN"} +{"text":"However this type of strong evidence against the dead air hypothesis is left to future work .","label":5,"label_text":"OWN"} +{"text":"On a further 15 occasions ( 27 % ) , we found that the person in control of the dialogue signalled that they had no new information to offer .","label":5,"label_text":"OWN"} +{"text":"The bulk of LFG involves stating constraints about a single model , and is well equipped for this task , but constraining equations involve looking at the structure of other possible parse trees .","label":5,"label_text":"OWN"} +{"text":"We also see that l = 88 Hz and h = 96 Hz .","label":5,"label_text":"OWN"} +{"text":"Sometimes a noun phrase can be ambiguous , for example ` I like the elephant ' , where the speaker could like a particular elephant , or all elephants .","label":5,"label_text":"OWN"} +{"text":"This approach could be viewed as putting 
the cart before the horse ; the usefulness of stochastic information in parsers presumes that a certain level of accuracy can be achieved by the grammar alone .","label":5,"label_text":"OWN"} +{"text":"The standard approach to estimating an n-gram model is a two step process .","label":6,"label_text":"OTH"} +{"text":"We combine the antecedents and consequents of the foregoing formulae to yield :","label":5,"label_text":"OWN"} +{"text":"The problem with these approaches is that they assign a dual life to pragmatic inferences : in the initial stage , as members of a simple or complex utterance , they are defeasible .","label":1,"label_text":"CTR"} +{"text":"Yet intuitively , they are similar with respect to their right syntactic context despite the lack of common right neighbors .","label":5,"label_text":"OWN"} +{"text":"Hence the intersection question is undecidable too .","label":5,"label_text":"OWN"} +{"text":"If the probabilities differ , the local perplexity can be viewed as a generalized branching factor that takes this into account .","label":4,"label_text":"BKG"} +{"text":"This clearly demonstrates an extremely important consequence of using our dataflow analysis to compile a declarative grammar into a grammar optimized for generation .","label":5,"label_text":"OWN"} +{"text":"The maximum number of parses was 2736 for one 29-word sentence , but on the other hand some of even the longest sentences had fewer than ten parses .","label":5,"label_text":"OWN"} +{"text":"The same situation was detected withBrill's tagger which in general was slightly more accurate than the Xerox one .","label":5,"label_text":"OWN"} +{"text":"According to this axiom , if a set of paths G has meaning X , then for each R-relationthat has been introduced , a resourcecan be produced .","label":5,"label_text":"OWN"} +{"text":"However , the spelling rules make no reference to PRESENT _ 3 s ; it is simply a device allowing categories and logical forms for irregular words to be built up 
using the same production rules as for regular words .","label":5,"label_text":"OWN"} +{"text":"Like speech acts , PCAs can be defined in terms of the communicative goals they fulfill as well as their possible verbalizations .","label":5,"label_text":"OWN"} +{"text":"Therefore , lower values for k ( and also for t ) are computationally preferable .","label":5,"label_text":"OWN"} +{"text":"","label":5,"label_text":"OWN"} +{"text":"We would like to extend our method to use this information in the future .","label":5,"label_text":"OWN"} +{"text":"As in the word inventory column ( described above ) , the length of each code word is represented in a fixed-length field .","label":5,"label_text":"OWN"} +{"text":"To provide SUPPORT for this course of action he produces an inference that follows from what she has told him in, namely You 're only getting 1500 ( dollars ) a year .","label":5,"label_text":"OWN"} +{"text":"The embedding conditions for the whole construction are just like those for a regular ` if ' or ` every ' clause , i.e. 
the sentence is true , if every proper embedding of the antecedent box can be extended to a proper embedding of the combination of the antecedent and the consequent boxes .","label":6,"label_text":"OTH"} +{"text":"Both participants here believe at the outset that the expert has sufficient information about the situation and complete and correct knowledge about how to execute the Task .","label":5,"label_text":"OWN"} +{"text":"Therefore we don't deal with categoryin this paper .","label":5,"label_text":"OWN"} +{"text":"The selection of the most appropriate classes ( stage 3 ) is based on a global search through the candidates , in such a way that the final classes are mutually disjoint ( not related by hyperonymy ) .","label":6,"label_text":"OTH"} +{"text":"Firstly , consider conjuncts which correspond one to one in the categories of the corresponding words .","label":5,"label_text":"OWN"} +{"text":"One could define a weak notion of a semantic head which requires that the semantic form of the semantic head is a ( possibly empty ) substructure of the root semantics .","label":5,"label_text":"OWN"} +{"text":"It would seem to be more accurate to assign a greater distance to substitutions involving greater phonetic distinctions .","label":1,"label_text":"CTR"} +{"text":"Note that windowed counts are asymmetric .","label":5,"label_text":"OWN"} +{"text":"The amount of support contributed by a pairwise comparison is proportional to how informative the most informative subsumer is .","label":5,"label_text":"OWN"} +{"text":"To explore the relationship of control to planning , we compare the TODs with both types of ADs ( financial and support ) .","label":5,"label_text":"OWN"} +{"text":"It appears that the class of languages generated by PLTG is included in those languages generated by Linear Context-Free Rewriting SystemsVijay-Shanker et al. 
1987since the construction involved in a proof of this underlies the recognition algorithm discussed in the next section .","label":5,"label_text":"OWN"} +{"text":"This method uses the information available in the original Japanese sentence along with information about English countability at both the noun phrase and noun level that can be stored in Japanese to English transfer dictionaries .","label":5,"label_text":"OWN"} +{"text":"This necessitates a dynamic processing strategy , i.e. , memoization , extended with an abstraction function like , e.g. , restrictionShieber 1985, to weaken filtering and a subsumption check to discard redundant results .","label":6,"label_text":"OTH"} +{"text":"The phonological or orthographic changes involved in affixation may be quite complex , so dimensioncan be large , and a feature mechanism may be needed to handle such varied but interrelated morphosyntactic phenomena such as umlautTrost 1991, case , number , gender , and different morphological paradigms .","label":4,"label_text":"BKG"} +{"text":"These theories are usually implemented in a language such as Prolog that can simulate- term operations with first-order unification .","label":6,"label_text":"OTH"} +{"text":"Macroplanning produces a sequence of PCAs .","label":5,"label_text":"OWN"} +{"text":"A word-by-word incremental parser for a lexicalised version of dependency grammarMilward 1992.","label":3,"label_text":"BAS"} +{"text":"modal operators for talking about trees","label":5,"label_text":"OWN"} +{"text":"Agents can both fail to access a belief that would allow them to produce an optimal plan , as well as make a mistake in planning if a belief about how the world has changed as a result of planning is not salient .","label":6,"label_text":"OTH"} +{"text":"Figureshows the results of these experiments for these three artificial models averaged over 10 trials .","label":5,"label_text":"OWN"} +{"text":"The training corpus consisted of the 4,279 sentences in the 5,873 - 
sentence set that were analysable and consisted of fifteen words or less .","label":5,"label_text":"OWN"} +{"text":"Most of the informants were over seventy years old and had not spoken Irish since their youth .","label":6,"label_text":"OTH"} +{"text":"Determine a threshold entropy that yields the desired coverage .","label":5,"label_text":"OWN"} +{"text":"This is postulated by syntactic descriptions like","label":6,"label_text":"OTH"} +{"text":"the set of affixes and their allowed combinations , and","label":4,"label_text":"BKG"} +{"text":"The sparseness problem for co-occurrence vectors is not serious in this case because each context consists of plural words .","label":5,"label_text":"OWN"} +{"text":"This decomposition ofcan be viewed as first deciding on the content of a sentence , formulated as a set of relation edges according to a statistical model for, and then deciding on word order according to.","label":5,"label_text":"OWN"} +{"text":"However , as theoperator is quite a convenient piece of syntax for capturing the effect of phrase structure rules , we have included it as a primitive in .","label":5,"label_text":"OWN"} +{"text":"For instance , the Japanese compound noun `` SinGataKansetuZei '' ( new indirect tax ) , producessegementations possibilities for this case ( by consulting a Japanese dictionary , we would filter out some ) .","label":4,"label_text":"BKG"} +{"text":"In sentences with two or more quantifiers , there is generally an ambiguity concerning which quantifier has wider scope .","label":4,"label_text":"BKG"} +{"text":"So far , we have not found these restrictions particularly problematic .","label":5,"label_text":"OWN"} +{"text":"For example , the eventual semantic structure may embed john at any depth e.g.","label":6,"label_text":"OTH"} +{"text":"Now , in section, we argued that no more than n - 1 rewrites would ever be necessary , thus the overall complexity of generation ( even when no solution is found ) 
is.","label":5,"label_text":"OWN"} +{"text":"The tag set used to annotate the English text is a slightly modified version of the Brown tag set , consisting of a total of 72 tags .","label":5,"label_text":"OWN"} +{"text":"Yarowsky 1992uses a fixed 100 word window to collect information used for sense disambiguation .","label":6,"label_text":"OTH"} +{"text":"Semitic languages employ a large number of diacritics to represent enter alia short vowels , doubled letters , and nunation .","label":4,"label_text":"BKG"} +{"text":"Then , an activated pattern P(w) is produced on Paradigme .","label":5,"label_text":"OWN"} +{"text":"Texts are more and more multilingual ( especially due to citations ) and we don't have enough tools to process them efficiently .","label":4,"label_text":"BKG"} +{"text":"There is however no firm agreement on just how to compute the distance matrices .","label":1,"label_text":"CTR"} +{"text":"The associated stringis empty .","label":5,"label_text":"OWN"} +{"text":"We are thus forced to under-filter and make an arbitrary choice between remaining alternatives .","label":5,"label_text":"OWN"} +{"text":"The figures include the degree of ambiguity , that is , the number of words in the corpus for which more than one tag was hypothesised .","label":5,"label_text":"OWN"} +{"text":"The grammar induction algorithms most successful in language modeling include the Inside-Outside algorithmLari and Young 1990,Lari and Young 1991,Pereira and Schabes 1992, a special case of the Expectation-Maximization algorithmDempster et al. 
1977, and work byMcCandless 1993.","label":6,"label_text":"OTH"} +{"text":"Goals and clauses are associated with preference values that are intended to model the degree of confidence that a particular solution is the ` correct ' one .","label":5,"label_text":"OWN"} +{"text":"With the length of pre-subject extended to 15 words , and subject to 12 words , an average of 2 % are excluded ( 7 out of 351 ) .","label":5,"label_text":"OWN"} +{"text":"The two linguistically closest sites are grouped into one dialect , and thenceforth treated as a unit .","label":6,"label_text":"OTH"} +{"text":"In effect , the abstract specifications of the generation algorithms which we gave above , could be read as parsing algorithms , modulo a few changes ( of the success condition and the link relation ) .","label":5,"label_text":"OWN"} +{"text":"The experiment is intended to illustrate SPATTER 's ability to accurately parse a highly-ambiguous , large-vocabulary domain .","label":5,"label_text":"OWN"} +{"text":"An array of genes , P .","label":6,"label_text":"OTH"} +{"text":"An attempted solution to this problem is to impose restrictions on neighbouring cutnodes .","label":5,"label_text":"OWN"} +{"text":"In contrast , the conjunction although used before the third clause in exampleindicates a Coherent Situation relation .","label":5,"label_text":"OWN"} +{"text":"resulting in the sentence :","label":6,"label_text":"OTH"} +{"text":"Span :","label":5,"label_text":"OWN"} +{"text":"Hence , in general , the sister goals must be reordered according to the degree of instantiation of their semantic representations .","label":5,"label_text":"OWN"} +{"text":"combine modules from different systems .","label":4,"label_text":"BKG"} +{"text":"B instantiated to s","label":5,"label_text":"OWN"} +{"text":"This concept of reference time is no longer an instant of time , but rather , an interval .","label":6,"label_text":"OTH"} +{"text":"A summary consisted of concise reference to the entire set of 
information given about the client 's problem or the solution plan .","label":5,"label_text":"OWN"} +{"text":"However , the superiority of n-gram models in the part-of-speech domain indicates that to be competitive in modeling naturally-occurring data , it is necessary to model collocational information accurately .","label":1,"label_text":"CTR"} +{"text":"This third possibility is therefore to provide a syntactic correlate of lambda expressions .","label":6,"label_text":"OTH"} +{"text":"However , an unwelcome consequence of this approach , which appears to have gone unnoticed in the literature , arises in cases in which more than two verbs are conjoined .","label":1,"label_text":"CTR"} +{"text":"By repeated applications ofwe can write down the following expression for:","label":5,"label_text":"OWN"}