UNDERSTANDING THE SOURCE LANGUAGE TEXT IN SIMULTANEOUS INTERPRETING

By Bistra Alexieva
St. Kliment Ohridski University of Sofia, Bulgaria

1. Introduction

An adequate retrieval of the content of the Source Language (SL) text in Simultaneous Interpreting (SI) is crucial to the realisation of the communicative act, more crucial than in any other type of translation, for errors in this respect, as is well known, can hardly remain unnoticed. This is why one is tempted to revisit the issues relevant to the first phase and address them again, in the hope of finding answers to at least some of the questions related to: /a/ the specific textual parameters that may facilitate or hamper the comprehension of the SL text, and /b/ the contextual and situational factors that make it possible for the Simultaneous Interpreter (SIr) to grasp the content of the "running" SL text. This is all the more important because the SIr is not the Addressee (the intended recipient), and the Speaker, in building the text, cannot be expected to take into account the SIr's knowledge of the conference topic, which is usually less than that of the real addressees (the conference participants). The SIr, as a rule, is not a member of the "discourse community" (Swales 1990: 23-28).

1.1. Among the issues raised under /a/, the most important seem to be those related to the nature of the SL text in terms of the following two major parameters: (i) the rate and clarity of its delivery, which has already been studied (e.g. Barik 1973, Shiryaev 1977), and (ii) the amount of information crammed into it, i.e. its semantic density, a problem which, to my knowledge, has not yet been thoroughly explored. The focus here will therefore be on issues related to the semantic density of the SL text as a major factor determining the ease or difficulty with which it lends itself to processing in an SI event. My major claims here will be:

- that the most powerful indicator of semantic density relevant to an SI situation is the text's implicitness, expressed through the ratio between the explicit and implicit predications (or propositions, PNs) constituting its content structure, because understanding a text means building or constructing predications (see Varantola 1980; Alexieva 1989; 1992; 1994) and linking them into a coherent whole; and
- that the explicit:implicit PN ratio can be employed for elaborating a more accurate procedure for measuring the comprehensibility, or listenability, of the SL text in SI, a procedure that may help us draw conclusions relevant to the theory, practice and didactics of SI (see Alexieva 1998).

1.2. Concerning the second series of questions arising around /b/ above, an attempt will be made to address only one of them, related to the SIr's ability to cope with the task even though, as a rule, s/he is not a member of the discourse community (in the sense used by Swales 1990: 23-28, that is, a community of specialists). The claims I shall venture to make here are, firstly, that it is the cumulative nature of SI as a process that ensures an increase in the SIr's feeling of familiarity with the conference topic, thus helping her/him build a communication community (Strolz 1997: 195) with the conference primary participants, that is, a community for that specific act of communication; and secondly, that the introduction of the notion of familiarity and the attempt to quantify it by means of a familiarity coefficient can ensure a relatively high degree of objectivity in admission aptitude tests and in quality assessment in general.

2. The SL Text Parameters Relevant to Its Comprehension

2.1. The Unidirectionality of the SL Text Delivery and the Multidirectionality of the Comprehension Process

The delivery of the SL text flows in one direction along the temporal axis; on the surface, therefore, the activity may look like a Markovian process. The ideal text for the SIr would then be one that satisfies the basic unidirectionality requirement of Markovian processing, that is, a right-branching text, which can be handled by means of a single left-to-right search (Garvin 1972: 87). Unfortunately, however, Markovian processes CANNOT provide a good account of what happens in text comprehension, or in text production, because in many cases in the processing of the linear sequence "A+B+C", for example, it is impossible to make a decision about the meaning of "A" without having heard "B" or "C". In (1), for example, one can correctly interpret the attributive function of the substantival forms in front of the head noun 'strategies' only after hearing the latter, except in cases where clearly marked prosody can help the SIr predict the specific syntactic position of the nouns and avoid at least a false start.

(1) The numerous drug-rehabilitation, crime-prevention and job-training program design strategies have not yielded very good results
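Purely by way of illustration, the storage demand created by a string like (1) can be pictured with a minimal sketch in which the tokens are read strictly left to right: every prenominal noun has to be held in temporary storage, its function undecided, until the head noun finally arrives. The token list and head noun are those of example (1); the variable names are merely illustrative.

```python
# Minimal sketch: a left-branching string cannot be resolved in a single
# left-to-right pass; each prenominal noun is buffered until the head noun
# arrives, and the buffer size is a rough proxy for the load on the SIr's STM.

tokens = ["drug-rehabilitation", "crime-prevention", "job-training",
          "program", "design", "strategies"]
HEAD = "strategies"          # head noun of the nominal conglomerate in (1)

buffer = []                  # temporary storage for unresolved modifiers
for token in tokens:
    if token != HEAD:
        buffer.append(token)                     # function still undecided
        print(f"heard '{token}': {len(buffer)} item(s) held in memory")
    else:
        # only now can the attributive function of the buffered nouns be fixed
        print(f"heard head noun '{HEAD}': "
              f"{len(buffer)} buffered modifiers resolved at once")
        buffer.clear()
```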
The processing of such a left-branching sequence involves its transformation into a non-linear hierarchical structure (King & Just 1991: 580), which in turn requires temporary storage of word representations, e.g. "A" or "B", or of the whole sequence, thus placing an additional load on the Short Term Memory (STM) of the SIr. Nominal conglomerates like the subject in (1) have a complex content structure consisting not of a single predication but, in some cases, of a large configuration of predications (explicit and implicit) condensed, as it were, into a nutshell, whose analysis requires a greater computational capacity, most of all because a great deal of mental effort is spent on performing "more-than-one-pass" operations and on retaining the beginning of the phrase. What will be argued here is: /a/ that this type of condensation is one of the most important parameters of semantic density, of the highest possible relevance to the understanding of a spoken text in the conditions of SI, and that, due to this, its quantitative measurement can help us assess more precisely the comprehensibility of the SL text; and /b/ that a successful SIr obviously makes use of some sort of compensatory mechanisms conducive to cancelling the antinomy between the unidirectionality of the running SL text and the multidirectionality of its processing.

2.2. Methods of Measuring the Ease/Difficulty of the SL Text: The Listenability Coefficient

There have been many efforts to define, in numerical terms, the difficulty a text may present to the reader, and there are a number of formulae by means of which one can calculate the readability of a text. One such formula is the one Flesch offered as early as 1948 (Miller 1951: 131-9), a formula which, however, is sometimes used, in my view erroneously, to determine the difficulty of spoken texts as well, e.g. texts given for Listening Comprehension tests with multiple choice questions.

Flesch's Formula

The first dimension in Flesch's formula "Reading ease = 206.84 – 0.84W – 1.02S", determining readability, namely "W", refers to the number of syllables per 100 words. Obviously a higher number of polysyllabic words will yield a higher value of W, which, it is claimed, will be indicative of a lower reading ease. Such an approach, however, is irrelevant to SI for the following two basic reasons:

/a/ The number of syllables can be relevant to SI only in terms of how many syllables (or rather combinations of consonants + vowels, and not written combinations of letters) are uttered per minute; this has already been studied
and tables have been worked out for the minimum, optimum and maximum rates of delivery and for the way these affect the SIr's performance (e.g. Shiryaev 1977); and

/b/ A higher number of polysyllabic words will, in my view, facilitate comprehension in SI rather than hamper it, for polysyllabic words take more time to utter (i.e. they occupy a larger interval along the temporal axis) and they are usually double-stressed, which makes their identification and processing easier. Apart from that, interpreters are expected to know such words (unlike children, for whom Flesch's formula was originally created), and therefore the inclusion of this parameter in a formulaic expression referring to SI is irrelevant.

The second parameter in Flesch's formula, "S", representing the average number of words per sentence, is also irrelevant to SI since, as can be seen from the two versions of (2) given below, it is not so much the number of words in a sentence that determines its COMPREHENSIBILITY, or LISTENABILITY (the term I suggest as the spoken-medium counterpart to READABILITY), but the degree of explicitness of the semantic relationships between them, i.e. how many of the constituents of the deep predications appear on the surface and how fully they are expressed. My claim is that texts with more explicit PNs, e.g. (2-a), are much easier to comprehend, via the auditory channel in particular, than texts with a smaller number of explicit PNs, e.g. (2-b), although the former has more words (23) than the latter (17), because the configuration of PNs representing the content is more explicit in the first than in the second.

(2-a) The successful graduation rate from the army training camps is about 50 per cent, but this is not a reason to close them. (23 words)

(2-b) The army training camps 50 per cent successful graduation rate is not a reason to close them. (17 words)

This claim finds support in the results received from: /a/ four interpreting classes (a total of about 50 students); /b/ summary writing exercises (4 groups of students, a total of about 60); /c/ multiple choice Listening Comprehension tests (with 4 groups totalling 65 trainees in the courses organized for the Sofia University Admission Tests); and /d/ answers elicited from interpreters used as informants. The groups that were offered the version with more explicit PNs, for example (3-a), did incomparably better in all three types of tests (SI, Summary Writing and Listening Comprehension) than the ones offered (3-b), the version with a higher number of implicit PNs. What is more, almost all the gaps and errors in handling (3-b) are made in interpreting the highly condensed portions of the text, or the parts after them. For example, in almost 50 per cent of cases
the difficulty in interpreting the long subject of the sentence at the beginning of the second paragraph caused a greater time lag, inadequate handling of the following textual segment, or even its total omission.

(3-a) The second considerable problem that may arise as a result of Switzerland's joining the European Community is the Swiss franc. The Swiss franc is traditionally a strong currency, of which the Swiss are very proud, even though the currency has lost some of its glitter recently. But if the Maastricht Treaty is implemented as intended, then the currencies of all the member countries will simply disappear by the year 1997 or 1999 and will be replaced by a single currency which will be legal tender in each one of the Community countries. However, there are already signs that the Germans, for instance, are very unhappy to envisage the prospect of the Deutsche mark disappearing, therefore one can safely assume that the Swiss, too, will find it difficult to swallow if the Swiss franc is replaced by the European ecu just two or three years after the Swiss join the Community. (Explicit PNs = 16; implicit PNs = 6)

(3-b) The second considerable problem likely to arise as a result of Switzerland's joining the European Community is the Swiss franc. The Swiss franc is a traditionally strong currency of which the Swiss are very proud in spite of its having lost some of its glitter recently. However, the intended implementation of the Maastricht Treaty will simply result in the disappearance of the currencies of all the member countries by the year 1997 or 1999 and in their replacement by a single currency legally valid in all the Community countries. The appearance of certain signs indicative of the Germans' unhappiness about the prospect of the Deutsche mark disappearing makes it possible for us to safely assume that the replacement of the Swiss franc by the European ecu just two or three years after Switzerland's joining the Community will be very difficult for the Swiss to swallow. (Explicit PNs = 6; implicit PNs = 16)

When calculated by Flesch's formula, however, the differences between (3-a) and (3-b) are exactly the opposite: (3-b) is assessed as the easier one, since the value of S (words per sentence) for it is approximately 36, while for (3-a) it is higher, about 37.5; and the difference between the numbers of syllables is negligible. The lack of agreement between the results of the tests and enquiries mentioned above, on the one hand, and the values obtained via Flesch's formula, on the other, suggests that one should look for other ways of numerically assessing listening ease (or difficulty), because, in my view, the empirical evidence is more reliable.
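Purely for illustration, the calculation behind these figures can be sketched as follows. The shared syllable figure of 160 per 100 words is merely an assumed placeholder, since all that matters here is that the two versions differ only in average sentence length.

```python
# A sketch of the Flesch computation as cited above:
#   Reading ease = 206.84 - 0.84*W - 1.02*S,
# where W = syllables per 100 words and S = average number of words per sentence.

def reading_ease(syllables_per_100_words: float, words_per_sentence: float) -> float:
    return 206.84 - 0.84 * syllables_per_100_words - 1.02 * words_per_sentence

W = 160.0   # assumed common value for both versions (the difference is negligible)
print("(3-a):", round(reading_ease(W, 37.5), 1))  # lower score, i.e. rated harder
print("(3-b):", round(reading_ease(W, 36.0), 1))  # higher score, i.e. rated easier
# The formula thus ranks (3-b) as the easier text, the reverse of the ordering
# shown by the SI, summary-writing and listening-comprehension results.
```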
What, then, are the most important parameters determining a text's listening ease or difficulty, and how can we measure them?

The Listenability Coefficient

What I would suggest as an answer to the first question, on the basis of the empirical material described above, is that texts such as Version (3-b) are more difficult for listening comprehension due to: /a/ the high number of heavy nominal, participial and infinitival phrases representing condensed configurations of more than one (two, three or even more) implicit predications (PNs), and /b/ the lower number of explicitly given predications (PNs). If this type of condensation, or density (alongside the rate of delivery), is of crucial importance for the comprehension of the SL text in the conditions of SI (i.e. with only one single chance of hearing the text), then the RATIO BETWEEN THE IMPLICIT AND EXPLICIT PNs can be expected to be a reliable indicator of its LISTENABILITY.

I would therefore suggest that the LISTENABILITY of a text can be measured by using the formula

Kn = Σ Yn / X

either for calculating

(i) its Listening Ease (LE) = Σ PNexp / PNtotal, where X is replaced by PNtotal (the number of all the predications) and Yn by PNexp (the number of the explicit PNs); for (3-a), for example, the Listening Ease is LE = 16/22 = 0.73, while for (3-b) it is much lower: LE = 6/22 = 0.27; or

(ii) its Listening Difficulty (LD) = Σ PNimp / PNtotal, where X is again replaced by the total number of PNs, while Yn is replaced by PNimp (the number of the implicit, 'condensed', PNs).
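As a minimal sketch, the two measures can be computed as follows; the function names are merely illustrative, and the PN counts are those reported for (3-a) and (3-b) above.

```python
# Sketch of the Listenability measures defined above.

def listening_ease(pn_explicit: int, pn_total: int) -> float:
    """LE = (number of explicit PNs) / (total number of PNs)."""
    return pn_explicit / pn_total

def listening_difficulty(pn_implicit: int, pn_total: int) -> float:
    """LD = (number of implicit PNs) / (total number of PNs)."""
    return pn_implicit / pn_total

# Version (3-a): 16 explicit + 6 implicit PNs; version (3-b): 6 explicit + 16 implicit.
print(round(listening_ease(16, 22), 2))        # 0.73 for (3-a)
print(round(listening_ease(6, 22), 2))         # 0.27 for (3-b)
print(round(listening_difficulty(16, 22), 2))  # 0.73, i.e. (3-b) is the harder text
```

Since every PN in the total is counted as either explicit or implicit, LE and LD for a given text always sum to 1, so either value suffices to characterise it.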
involved in the other three phases, i.e. the text-based analysis (Step A), the inferencing phase (Step C), and the coordination of the first three (Step D).

Thus there is a sufficient amount of data from real conference events, as well as from experiments with SIrs and written translators, to support the contention that a successful SIr is an extremely good listener (and, in most cases, a much better listener than the conference participants), because her/his command of the SL is such that it permits her/him to make use of all types of clues and specific features of the phonic substance reaching her/his ears, prosodic features in particular. For example, an excellent command of prosody can help the SIr make predictions about the heavy nominal conglomerates discussed above and thus avoid false starts and/or an increase in the time lag. A weaker knowledge-based analysis phase would also involve a greater number of passes over a segment and more inferencing on the part of the SIr for the identification of the macro-predications of the content structure, and for the bridging of the gaps between the results of the text-based analysis and those of the knowledge-based phase. All this suggests that a successful SIr can also be expected to have a very high computational capacity in terms of the ability to perform a greater number of mental operations per second than the normal listener.

Apart from her/his good listening and computational abilities, however, the SIr must be making use of some compensatory mechanism to make up for her/his Achilles' heel: a poorer knowledge base. What I shall argue here is that the elaboration and employment of such a compensatory mechanism is possible if the SIr has the ability to get immersed in the conference topic and to gather information about it via different channels, gleaning all types of "prompters" consciously or unconsciously, thus increasing her/his familiarity with all the elements of the communicative situation and, most of all, with the conference topic covered by the preceding part of the macrotext. The quicker information accumulates and the more FAMILIARITY increases, the more rapid will be the SIr's integration into the conference communication community, consisting of the Primary Participants (the members of the discourse community of specialists in the respective field of knowledge or socio-political/economic sphere) and the Mediators (the SIrs).

3.1. The Feeling of Familiarity

There can be no doubt about the importance of the knowledge stored in the Long Term Memory (LTM) and its role in the understanding of a text, since there is supporting evidence coming from different theoretical quarters offering different models of the way knowledge is organised in our minds: as associative networks, or as scripts, frames and scenarios (see the discussion in Kintsch 1988: 163-6). But
the question arises as to whether the neatly organised knowledge stored in the LTM is the only resource we can use in text comprehension, that is, whether there are not also loose ends, loose traces of prior experience, traces of what we have seen or heard, be they individual words, phrases, sentences, bigger segments, or elements of communicative situations, conducive to the accumulation of more information, which can give rise to a feeling of familiarity and increase the SIr's perceptual fluency. If this is the case, then we can go a step further and try to find answers to the following two questions: (i) are there differences between individuals in their aptitude for making use of these loose traces, and (ii) how can we ascertain that an interpreter has this ability? Our major claim here is that the building of a communication community depends, to a very great extent, on the ability of the SIr to increase her/his familiarity with the conference topic and the other elements of the communicative situation in the course of the proceedings.

Earlier views on familiarity describe it as 'the essence of remembering' (James 1894; Pillsbury 1923; Titchener 1928, after Whittlesea et al. 1990: 716), a view which implies that "having and using a memory trace is a necessary and sufficient condition for the feeling of familiarity to arise" (Whittlesea et al. 1990: 716). More recent investigations, however, suggest that "the subjective experience of remembering can arise in the absence of a corresponding memory representation" (Whittlesea et al. 1990: 717) and that "subjective experience relies on an unconscious attributive or inferential process of the sort described by Helmholtz" (1910/1962, after Whittlesea, op. cit., p. 717). Helmholtz suggests that there is an unconscious inferencing process, based on the claim that "memory for prior experience contributes to subjective experience of a present stimulus, although people are generally unaware of the effects of the past on perception of the present" (op. cit.).

One might object that claims of such a loose relationship between the feeling of familiarity and the use of memory representations are irrelevant to SI, since here the SIr is often expected to understand texts of a rather complex content structure, for which purpose only well and neatly organised structures will do, and hence that the feeling of familiarity can be discussed only in connection with memory representations. However, there seems to be some evidence (though rather subjective, for it is mostly based on self-observation and on the answers of some of my colleagues) that we can often make use of what might at first glance seem insignificant: scraps of phrases, even combinations of sounds, read or heard; by inferencing on them we may come to the right decision, e.g. about the type of suffix one may use in the RL for an SL term, or in deciphering a noun phrase containing a proper name standing for the product of a firm, etc.
What I have been trying to suggest so far is that the introduction of the notion of FAMILIARITY (in both its versions: as related to memory recollections, and as an unconscious attributive and inferencing process) may throw light on the comprehension process in SI, particularly because it is intimately connected with the distribution and re-distribution of attention. In his 1991 paper, on the basis of experimental data, Jacoby comes to the following conclusion: "The use of familiarity as a basis for recognition memory judgements (an automatic use of memory) is shown to be invariant across full versus divided attention, manipulated at test" (Jacoby 1991: 513). In other words, familiarity does not seem to be affected by the degree of attention concentration (whether full or divided), "while recollection (intentional use of memory) is hampered when attention is divided" (op. cit.).

Obviously, since SI is an interpreter-mediated event in which divided attention is the rule rather than the exception, and which, with regard to the possibility of accumulating information, can be described as a cumulative process, it will be worth the effort to objectify our intuition about the important role the feeling of familiarity can play in the process of comprehending the SL text and in diminishing the strain the SIr is working under. It will certainly be difficult, perhaps even impossible, to make an inventory of the loose scraps of knowledge, the loose ends from all our previous experience, and to study the way these may contribute to an easier and fuller understanding of a particular text. But what we can do in relation to the understanding of an SL text in the conditions of SI is to try to find out something about the role of the traces that might have been left in our memory from the beginning of the conference, i.e. from immediate past experience, because we can use as data all the recordings of the speeches delivered from the rostrum, as well as the written materials the SIr might have read before the conference or before one of its sessions. Therefore, it seems possible to collect more solid evidence and to try to find answers to a series of questions concerning (i) the elements and features of the SL text that can evoke in the SIr a feeling of familiarity with something already heard or seen, on the word, phrase, sentence, paragraph, higher textual or prosodic level, and (ii) the effect that the SIr's feeling of familiarity may have on diminishing stress.

3.2. The Familiarity Coefficient

Unfortunately, the great variety of possible "trace makers" renders the study of familiarity rather difficult if we try to capture all of them, for one can hardly think of a formula which can take care of so many parameters.
However, it is not only SI research that has encountered such problems. Luckily for us, they have been solved in a more or less satisfactory way in other fields of study by choosing one or two of the most important parameters that could be expected to be indicative of the value of the remaining ones. What we shall argue here is that we can make a good start in our efforts to measure FAMILIARITY if we use, as a clue, the traces that might have been left by words seen or heard from the beginning of the conference. The choice of this parameter is promising because, although analysis on the level of the word can hardly be recommended (whatever the type of translating/interpreting), words are the building material of a text, and the way they are combined into higher structures and, most of all, the frequency of their occurrence (verbatim or via synonyms, near-synonyms, etc.) may, to a great extent, affect the comprehensibility of a micro-text as part of the macro-text in SI (Alexieva 1985: 195).

Therefore I would suggest, as a starting point in the study of Familiarity, trying to find its numerical expression by means of the formula

Kn = Σ Yn / X

(the same used for Listenability, see the previous section), where K is the Familiarity Coefficient (FC) of a text, X is the number of notional words used in it, and Yn is the number of the notional words occurring again (repetition of the same words, synonyms or near-synonyms, even antonyms, i.e. words related to one another in any of the possible semantic relationships: complete coincidence, overlapping, opposites, reversives, contiguity, etc.). This gives us the chance, by using the strategy of multi-stage sampling (Alexieva 1997, in Gile 1997), to take samples from every session of a conference in order to see what different values the Familiarity Coefficient may acquire. Thus Sample (6), with about 80 notional words, taken from the middle of the first morning session of a conference (immediately after the Opening), has about 10 recurring notions and hence a Familiarity Coefficient

K1 = 10/80 = 1/8 = approx. 0.12.

The value of K for (7), with about the same number of notional words but taken from the afternoon session, is higher, since the number of recurring notions is about 20, hence K2 = 20/80 = approx. 0.25.
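As with Listenability, the ratio is simple to compute once the notional words have been identified. The following minimal sketch counts only verbatim recurrence against the words already heard, so its values are a lower bound on the coefficient as defined above, which also admits synonyms, near-synonyms and antonyms; the function and variable names are merely illustrative.

```python
# Simplified sketch of the Familiarity Coefficient K = (sum of Yn) / X.
# Only verbatim recurrence of notional words is checked; extending Yn to
# synonyms, near-synonyms, antonyms, etc. would require a lexical resource.

def familiarity_coefficient(sample, heard_so_far):
    """K = Yn / X, where X is the number of notional words in the sample and
    Yn is the number of those words already encountered since the beginning
    of the conference (approximated here by membership in heard_so_far)."""
    if not sample:
        return 0.0
    recurring = sum(1 for word in sample if word.lower() in heard_so_far)
    return recurring / len(sample)

# Order-of-magnitude check against the figures quoted above:
print(10 / 80)   # 0.125, i.e. the approx. 0.12 reported for Sample (6)
print(20 / 80)   # 0.25 for the afternoon Sample (7)
```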