Wednesday, November 9, 2022

Computer-assisted language learning

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Computer-assisted_language_learning

Computer-assisted language learning (CALL), the British term, or computer-aided instruction (CAI)/computer-aided language instruction (CALI), the American terms, is briefly defined in a seminal work by Levy (1997: p. 1) as "the search for and study of applications of the computer in language teaching and learning". CALL embraces a wide range of information and communications technology applications and approaches to teaching and learning foreign languages, from the "traditional" drill-and-practice programs that characterised CALL in the 1960s and 1970s to more recent manifestations of CALL, e.g. as used in a virtual learning environment and Web-based distance learning. It also extends to the use of corpora and concordancers, interactive whiteboards, computer-mediated communication (CMC), language learning in virtual worlds, and mobile-assisted language learning (MALL).

The term CALI (computer-assisted language instruction) was in use before CALL, reflecting its origins as a subset of the general term CAI (computer-assisted instruction). CALI fell out of favour among language teachers, however, as it appeared to imply a teacher-centred approach (instructional), whereas language teachers are more inclined to prefer a student-centred approach, focusing on learning rather than instruction. CALL began to replace CALI in the early 1980s (Davies & Higgins 1982: p. 3) and it is now incorporated into the names of the growing number of professional associations worldwide.

An alternative term, technology-enhanced language learning (TELL), also emerged around the early 1990s: e.g. the TELL Consortium project, University of Hull.

The current philosophy of CALL puts a strong emphasis on student-centred materials that allow learners to work on their own. Such materials may be structured or unstructured, but they normally embody two important features: interactive learning and individualised learning. CALL is essentially a tool that helps teachers to facilitate the language learning process. It can be used to reinforce what has already been learned in the classroom or as a remedial tool to help learners who require additional support.

The design of CALL materials generally takes into consideration principles of language pedagogy and methodology, which may be derived from different learning theories (e.g. behaviourist, cognitive, constructivist) and second-language learning theories such as Stephen Krashen's monitor hypothesis.

A combination of face-to-face teaching and CALL is usually referred to as blended learning. Blended learning is designed to increase learning potential and is more commonly found than pure CALL (Pegrum 2009: p. 27).

See Davies et al. (2011: Section 1.1, What is CALL?). See also Levy & Hubbard (2005), who raise the question Why call CALL "CALL"?

History

CALL dates back to the 1960s, when it was first introduced on university mainframe computers. The PLATO project, initiated at the University of Illinois in 1960, is an important landmark in the early development of CALL (Marty 1981). The advent of the microcomputer in the late 1970s brought computing within the range of a wider audience, resulting in a boom in the development of CALL programs and a flurry of publications of books on CALL in the early 1980s.

Dozens of CALL programs are currently available on the internet, at prices ranging from free to expensive, and other programs are available only through university language courses.

There have been several attempts to document the history of CALL. Sanders (1995) covers the period from the mid-1960s to the mid-1990s, focusing on CALL in North America. Delcloque (2000) documents the history of CALL worldwide, from its beginnings in the 1960s to the dawning of the new millennium. Davies (2005) takes a look back at CALL's past and attempts to predict where it is going. Hubbard (2009) offers a compilation of 74 key articles and book excerpts, originally published in the years 1988–2007, that give a comprehensive overview of the wide range of leading ideas and research results that have exerted an influence on the development of CALL or that show promise in doing so in the future. A published review of Hubbard's collection can be found in Language Learning & Technology 14, 3 (2010).

Butler-Pascoe (2011) looks at the history of CALL from a different point of view, namely the evolution of CALL in the dual fields of educational technology and second/foreign language acquisition and the paradigm shifts experienced along the way.

See also Davies et al. (2011: Section 2, History of CALL).

Typology and phases

During the 1980s and 1990s, several attempts were made to establish a CALL typology. A wide range of different types of CALL programs was identified by Davies & Higgins (1985), Jones & Fortescue (1987), Hardisty & Windeatt (1989) and Levy (1997: pp. 118ff.). These included gap-filling and Cloze programs, multiple-choice programs, free-format (text-entry) programs, adventures and simulations, action mazes, sentence-reordering programs, exploratory programs—and "total Cloze", a type of program in which the learner has to reconstruct a whole text. Most of these early programs still exist in modernised versions.

Since the 1990s, it has become increasingly difficult to categorise CALL as it now extends to the use of blogs, wikis, social networking, podcasting, Web 2.0 applications, language learning in virtual worlds and interactive whiteboards (Davies et al. 2010: Section 3.7).

Warschauer (1996) and Warschauer & Healey (1998) took a different approach. Rather than focusing on the typology of CALL, they identified three historical phases of CALL, classified according to their underlying pedagogical and methodological approaches:

  • Behavioristic CALL: conceived in the 1950s and implemented in the 1960s and 1970s.
  • Communicative CALL: 1970s to 1980s.
  • Integrative CALL: embracing multimedia and the Internet: 1990s.

Most CALL programs in Warschauer & Healey's first phase, Behavioristic CALL (1960s to 1970s), consisted of drill-and-practice materials in which the computer presented a stimulus and the learner provided a response. At first, both could be done only through text. The computer would analyse students' input and give feedback, and more sophisticated programs would react to students' mistakes by branching to help screens and remedial activities. While such programs and their underlying pedagogy still exist today, behaviouristic approaches to language learning have been rejected by most language teachers, and the increasing sophistication of computer technology has led CALL to other possibilities.
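The stimulus-response-feedback cycle of these early drill-and-practice programs can be illustrated with a minimal sketch. The drill item, hint and function names below are invented for illustration; real programs of the period were written in languages such as BASIC or PLATO's TUTOR:

```python
# A toy drill item in the stimulus-response mould of 1960s-70s CALL.
# The vocabulary item and hint are invented for illustration.
DRILL = [
    # (stimulus, expected response, remedial hint)
    ("Translate into French: 'house'", "maison",
     "Think of the English word 'mansion'."),
]

def give_feedback(response: str, expected: str, hint: str) -> str:
    """Analyse the learner's input: confirm a correct answer, or
    'branch' to a remedial hint on a mistake."""
    if response.strip().lower() == expected:
        return "Correct!"
    return f"Not quite. Hint: {hint}"
```

Even this toy version shows the division of labour the text describes: the computer presents a stimulus, analyses the response, and branches to remedial material on error.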

The second phase described by Warschauer & Healey, Communicative CALL, is based on the communicative approach that became prominent in the late 1970s and 1980s (Underwood 1984). In the communicative approach the focus is on using the language rather than on analysis of the language, and grammar is taught implicitly rather than explicitly. It also allows for originality and flexibility in student output of language. The communicative approach coincided with the arrival of the PC, which made computing much more widely available and resulted in a boom in the development of software for language learning. The first CALL software in this phase continued to provide skill practice, but not in a drill format – for example paced reading, text reconstruction and language games – although the computer remained the tutor. In this phase, computers provided context for students to use the language, such as asking for directions to a place, and programs not designed for language learning, such as Sim City, Sleuth and Where in the World is Carmen Sandiego?, were used for language learning. Critics of this approach argued that the computer was being used in an ad hoc and disconnected fashion, serving more marginal aims rather than the central aims of language teaching.

The third phase of CALL described by Warschauer & Healey, Integrative CALL, starting from the 1990s, tried to address criticisms of the communicative approach by integrating the teaching of language skills into tasks or projects to provide direction and coherence. It also coincided with the development of multimedia technology (providing text, graphics, sound and animation) as well as computer-mediated communication (CMC). CALL in this period saw a definitive shift from the use of the computer for drill and tutorial purposes (the computer as a finite, authoritative base for a specific task) to a medium for extending education beyond the classroom. Multimedia CALL started with interactive laser videodiscs such as Montevidisco (Schneider & Bennion 1984) and A la rencontre de Philippe (Fuerstenberg 1993), both of which were simulations of situations in which the learner played a key role. These programs were later transferred to CD-ROMs, and new role-playing games (RPGs) such as Who is Oscar Lake? made their appearance in a range of different languages.

In a later publication Warschauer changed the name of the first phase of CALL from Behavioristic CALL to Structural CALL and also revised the dates of the three phases (Warschauer 2000):

  • Structural CALL: 1970s to 1980s.
  • Communicative CALL: 1980s to 1990s.
  • Integrative CALL: 2000 onwards.

Bax (2003) took issue with Warschauer & Healey (1998) and Warschauer (2000) and proposed these three phases:

  • Restricted CALL – mainly behaviouristic: 1960s to 1980s.
  • Open CALL – i.e. open in terms of feedback given to students, software types and the role of the teacher, and including simulations and games: 1980s to 2003 (i.e. the date of Bax's article).
  • Integrated CALL – still to be achieved. Bax argued that at the time of writing language teachers were still in the Open CALL phase, as true integration could only be said to have been achieved when CALL had reached a state of "normalisation" – e.g. when using CALL was as normal as using a pen.

See also Bax & Chambers (2006) and Bax (2011), in which the topic of "normalisation" is revisited.

Flashcards

A basic use of CALL is in vocabulary acquisition using flashcards, which requires quite simple programs. Such programs often make use of spaced repetition, a technique whereby the learner is presented with the vocabulary items that need to be committed to memory at increasingly longer intervals until long-term retention is achieved. This has led to the development of a number of applications known as spaced repetition systems (SRS), including generic packages such as Anki and SuperMemo, and programs such as BYKI and phase-6, which have been designed specifically for learners of foreign languages.
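The core scheduling idea behind such systems can be sketched in a few lines. This is a deliberately simplified model with invented names: a correct answer doubles the review interval, a mistake resets it. Real SRS programs such as SuperMemo use more elaborate scheduling formulas:

```python
from dataclasses import dataclass

@dataclass
class Card:
    """A vocabulary flashcard with its review schedule (toy model)."""
    prompt: str
    answer: str
    interval_days: int = 1   # days until the next review
    due_day: int = 0         # day number on which the card is next due

def review(card: Card, correct: bool, today: int) -> None:
    """Reschedule a card after a review: correct answers lengthen the
    interval (here simply doubled); a mistake resets it to one day."""
    if correct:
        card.interval_days *= 2
    else:
        card.interval_days = 1
    card.due_day = today + card.interval_days

def due_cards(deck: list[Card], today: int) -> list[Card]:
    """Cards whose review date has arrived."""
    return [c for c in deck if c.due_day <= today]
```

The point of the doubling is exactly the behaviour described above: items the learner knows recede to ever longer intervals, while troublesome items keep reappearing.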

Software design and pedagogy

Above all, careful consideration must be given to pedagogy in designing CALL software, but publishers of CALL software tend to follow the latest trend, regardless of its desirability. Moreover, approaches to teaching foreign languages are constantly changing, dating back to grammar-translation, through the direct method, audio-lingualism and a variety of other approaches, to the more recent communicative approach and constructivism (Decoo 2001).

Designing and creating CALL software is an extremely demanding task, calling upon a range of skills. Major CALL development projects are usually managed by a team of people:

  • A subject specialist (also known as a content provider) – usually a language teacher – who is responsible for providing the content and pedagogical input. More than one subject specialist is required for larger CALL projects.
  • A programmer who is familiar with the chosen programming language or authoring tool.
  • A graphic designer, to produce pictures and icons, and to advise on fonts, colour, screen layout, etc.
  • A professional photographer or, at the very least, a very good amateur photographer. Graphic designers often have a background in photography too.
  • A sound engineer and a video technician will be required if the package is to contain substantial amounts of sound and video.
  • An instructional designer. Developing a CALL package is more than just putting a text book into a computer. An instructional designer will probably have a background in cognitive psychology and media technology, and will be able to advise the subject specialists in the team on the appropriate use of the chosen technology (Gimeno & Davies 2010).

CALL inherently supports learner autonomy, the final of the eight conditions that Egbert et al. (2007) cite as "Conditions for Optimal Language Learning Environments". Learner autonomy places the learner firmly in control so that he or she "decides on learning goals" (Egbert et al., 2007, p. 8).

It is all too easy when designing CALL software to take the comfortable route and produce a set of multiple-choice and gap-filling exercises, using a simple authoring tool (Bangs 2011), but CALL is much more than this; Stepp-Greany (2002), for example, describes the creation and management of an environment incorporating a constructivist and whole language philosophy.

According to constructivist theory, learners are active participants in tasks in which they "construct" new knowledge derived from their prior experience. Learners also assume responsibility for their learning, and the teacher is a facilitator rather than a purveyor of knowledge.

Whole language theory embraces constructivism and postulates that language learning moves from the whole to the part, rather than building sub-skills to lead towards the higher abilities of comprehension, speaking, and writing. It also emphasises that comprehending, speaking, reading, and writing skills are interrelated, reinforcing each other in complex ways. Language acquisition is, therefore, an active process in which the learner focuses on cues and meaning and makes intelligent guesses.

Additional demands are placed upon teachers working in a technological environment incorporating constructivist and whole language theories. The development of teachers' professional skills must include new pedagogical as well as technical and management skills. Regarding the issue of teacher facilitation in such an environment, the teacher has a key role to play, but there could be a conflict between the aim to create an atmosphere for learner independence and the teacher's natural feelings of responsibility. In order to avoid learners' negative perceptions, Stepp-Greany points out that it is especially important for the teacher to continue to address their needs, especially those of low-ability learners.

Multimedia

Language teachers have been avid users of technology for a very long time. Gramophone records were among the first technological aids to be used by language teachers in order to present students with recordings of native speakers' voices, and broadcasts from foreign radio stations were used to make recordings on reel-to-reel tape recorders. Other examples of technological aids that have been used in the foreign language classroom include slide projectors, film-strip projectors, film projectors, videocassette recorders and DVD players. In the early 1960s, integrated courses (which were often described as multimedia courses) began to appear. Examples of such courses are Ecouter et Parler (consisting of a coursebook and tape recordings) and Deutsch durch die audiovisuelle Methode (consisting of an illustrated coursebook, tape recordings and a film-strip – based on the Structuro-Global Audio-Visual method).

During the 1970s and 1980s standard microcomputers were incapable of producing sound and they had poor graphics capability. This represented a step backwards for language teachers, who by this time had become accustomed to using a range of different media in the foreign language classroom. The arrival of the multimedia computer in the early 1990s was therefore a major breakthrough as it enabled text, images, sound and video to be combined in one device and the integration of the four basic skills of listening, speaking, reading and writing (Davies 2011: Section 1).

Examples of CALL programs for multimedia computers that were published on CD-ROM and DVD from the mid-1990s onwards are described by Davies (2010: Section 3). CALL programs are still being published on CD-ROM and DVD, but Web-based multimedia CALL has now virtually supplanted these media.

Following the arrival of multimedia CALL, multimedia language centres began to appear in educational institutions. While multimedia facilities offer many opportunities for language learning with the integration of text, images, sound and video, these opportunities have often not been fully utilised. One of the main promises of CALL is the ability to individualise learning but, as with the language labs that were introduced into educational institutions in the 1960s and 1970s, the use of the facilities of multimedia centres has often devolved into rows of students all doing the same drills (Davies 2010: Section 3.1). There is therefore a danger that multimedia centres may go the same way as the language labs. Following a boom period in the 1970s, language labs went rapidly into decline. Davies (1997: p. 28) lays the blame mainly on the failure to train teachers to use language labs, both in terms of operation and in terms of developing new methodologies, but there were other factors such as poor reliability, lack of materials and a lack of good ideas.

Managing a multimedia language centre requires not only staff who have a knowledge of foreign languages and language teaching methodology but also staff with technical know-how and budget management ability, as well as the ability to combine all these into creative ways of taking advantage of what the technology can offer. A centre manager usually needs assistants for technical support, for managing resources and even for tutoring students.

Multimedia centres lend themselves to self-study and potentially self-directed learning, but this is often misunderstood. The simple existence of a multimedia centre does not automatically lead to students learning independently. Significant investment of time is essential for materials development and for creating an atmosphere conducive to self-study. Unfortunately, administrators often have the mistaken belief that buying hardware by itself will meet the needs of the centre, allocating 90% of the budget to hardware and virtually ignoring software and staff training needs (Davies et al. 2011: Foreword).

Self-access language learning centres or independent learning centres have emerged partially independently and partially in response to these issues. In self-access learning, the focus is on developing learner autonomy through varying degrees of self-directed learning, as opposed to (or as a complement to) classroom learning. In many centres learners access materials and manage their learning independently, but they also have access to staff for help. Many self-access centres are heavy users of technology, and an increasing number of them now offer online self-access learning opportunities. Some centres have developed novel ways of supporting language learning outside the context of the language classroom (also called 'language support') by developing software to monitor students' self-directed learning and by offering online support from teachers. Centre managers and support staff may need to have new roles defined for them to support students' efforts at self-directed learning: see Mozzon-McPherson & Vismans (2001), who refer to a new job description, namely that of the "language adviser".

Internet

The emergence of the World Wide Web (now known simply as "the Web") in the early 1990s marked a significant change in the use of communications technology for all computer users. Email and other forms of electronic communication had been in existence for many years, but the launch of Mosaic, the first graphical Web browser, in 1993 brought about a radical change in the ways in which we communicate electronically. The launch of the Web in the public arena immediately began to attract the attention of language teachers. Many language teachers were already familiar with the concept of hypertext on stand-alone computers, which made it possible to set up non-sequential structured reading activities for language learners in which they could point to items of text or images on a page displayed on the computer screen and branch to any other pages, e.g. in a so-called "stack" as implemented in the HyperCard program on Apple Mac computers. The Web took this one stage further by creating a worldwide hypertext system that enabled the user to branch to different pages on computers anywhere in the world simply by pointing and clicking on a piece of text or an image. This opened up access to thousands of authentic foreign-language websites that teachers and students could use in a variety of ways. A problem that arose, however, was that this could lead to a good deal of time-wasting if Web browsing was used in an unstructured way (Davies 1997: pp. 42–43), and language teachers responded by developing more structured activities and online exercises (Leloup & Ponterio 2003). Davies (2010) lists over 500 websites where links to online exercises can be found, along with links to online dictionaries and encyclopaedias, concordancers, translation aids and other miscellaneous resources of interest to the language teacher and learner.

The launch of the (free) Hot Potatoes (Holmes & Arneil) authoring tool, which was first demonstrated publicly at the EUROCALL 1998 conference, made it possible for language teachers to create their own online interactive exercises. Other useful tools are produced by the same authors.

In its early days the Web could not compete seriously with multimedia CALL on CD-ROM and DVD. Sound and video quality was often poor, and interaction was slow. But now the Web has caught up. Sound and video are of high quality and interaction has improved tremendously, although this does depend on sufficient bandwidth being available, which is not always the case, especially in remote rural areas and developing countries. One area in which CD-ROMs and DVDs are still superior is in the presentation of listen/respond/playback activities, although such activities on the Web are continually improving.

Since the early 2000s there has been a boom in the development of so-called Web 2.0 applications. Contrary to popular opinion, Web 2.0 is not a new version of the Web; rather, it implies a shift in emphasis from Web browsing, which is essentially a one-way process (from the Web to the end-user), to making use of Web applications in the same way as one uses applications on a desktop computer. It also implies more interaction and sharing. Walker, Davies & Hewer (2011: Section 2.1) list a number of examples of Web 2.0 applications that language teachers are using.

There is no doubt that the Web has proved to be a main focus for language teachers, who are making increasingly imaginative use of its wide range of facilities: see Dudeney (2007) and Thomas (2008). Above all, the use of Web 2.0 tools calls for a careful reexamination of the role of the teacher in the classroom (Richardson 2006).

Corpora and concordancers

Corpora have been used for many years as the basis of linguistic research and also for the compilation of dictionaries and reference works such as the Collins Cobuild series, published by HarperCollins. Tribble & Barlow (2001), Sinclair (2004) and McEnery & Wilson (2011) describe a variety of ways in which corpora can be used in language teaching.

An early reference to the use of electronic concordancers in language teaching can be found in Higgins & Johns (1984: pp. 88–94), and many examples of their practical use in the classroom are described by Lamy & Klarskov Mortensen (2010).

It was Tim Johns (1991), however, who raised the profile of the use of concordancers in the language classroom with his concept of Data-driven learning (DDL). DDL encourages learners to work out their own rules about the meaning of words and their usage by using a concordancer to locate examples in a corpus of authentic texts. It is also possible for the teacher to use a concordancer to find examples of authentic usage to demonstrate a point of grammar or typical collocations, and to generate exercises based on the examples found. Various types of concordancers and where they can be obtained are described by Lamy & Klarskov Mortensen (2011).

Robb (2003) shows how it is possible to use Google as a concordancer, but he also points out a number of drawbacks, for instance there is no control over the educational level, nationality, or other characteristics of the creators of the texts that are found, and the presentation of the examples is not as easy to read as the output of a dedicated concordancer that places the key words (i.e. the search terms) in context.
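The keyword-in-context (KWIC) display that a dedicated concordancer produces – each hit centred in a fixed-width window so the search term lines up in a column – can be sketched as follows. This is a minimal illustration over an in-memory corpus; real concordancers index large corpora far more efficiently:

```python
import re

def kwic(corpus: list[str], keyword: str, width: int = 30) -> list[str]:
    """Key Word In Context: return one line per occurrence of `keyword`,
    padded so that all the hits align in a central column."""
    lines = []
    # Match the keyword as a whole word, case-insensitively.
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    for text in corpus:
        for m in pattern.finditer(text):
            left = text[max(0, m.start() - width):m.start()]
            right = text[m.end():m.end() + width]
            lines.append(f"{left:>{width}} {m.group(0)} {right:<{width}}")
    return lines
```

Run over a corpus of authentic texts, such a display lets learners induce patterns of usage and collocation for themselves, which is the essence of data-driven learning.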

Virtual worlds

Virtual worlds date back to the adventure games and simulations of the 1970s, for example Colossal Cave Adventure, a text-only simulation in which the user communicated with the computer by typing commands at the keyboard. Language teachers discovered that it was possible to exploit these text-only programs by using them as the basis for discussion. Jones G. (1986) describes an experiment based on the Kingdom simulation, in which learners played roles as members of a council governing an imaginary kingdom. A single computer in the classroom was used to provide the stimulus for discussion, namely simulating events taking place in the kingdom: crop planting time, harvest time, unforeseen catastrophes, etc.

The early adventure games and simulations led on to multi-user variants, which were known as MUDs (Multi-user domains). Like their predecessors, MUDs were text-only, with the difference that they were available to a wider online audience. MUDs then led on to MOOs (Multi-user domains object-oriented), which language teachers were able to exploit for teaching foreign languages and intercultural understanding: see Donaldson & Kötter (1999) and Shield (2003).

The next major breakthrough in the history of virtual worlds was the graphical user interface. Lucasfilm's Habitat (1986) was one of the first virtual worlds that was graphically based, albeit only in a two-dimensional environment. Each participant was represented by a visual avatar who could interact with other avatars using text chat.

Three-dimensional virtual worlds such as Traveler and Active Worlds, both of which appeared in the 1990s, were the next important development. Traveler included the possibility of audio communication (but not text chat) between avatars who were represented as disembodied heads in a three-dimensional abstract landscape. Svensson (2003) describes the Virtual Wedding Project, in which advanced students of English made use of Active Worlds as an arena for constructivist learning.

The 3D world of Second Life was launched in 2003. Initially perceived as another role-playing game (RPG), it began to attract the interest of language teachers with the launch of the first of the series of SLanguages conferences in 2007. Walker, Davies & Hewer (2011: Section 14.2.1) and Molka-Danielsen & Deutschmann (2010) describe a number of experiments and projects that focus on language learning in Second Life. See also the Wikipedia article Virtual world language learning.

To what extent Second Life and other virtual worlds will become established as important tools for teachers of foreign languages remains to be seen. It has been argued by Dudeney (2010) in his That's Life blog that Second Life is "too demanding and too unreliable for most educators". The subsequent discussion shows that this view is shared by many teachers, but many others completely disagree.

Regardless of the pros and cons of Second Life, language teachers' interest in virtual worlds continues to grow. The joint EUROCALL/CALICO Virtual Worlds Special Interest Group was set up in 2009, and there are now many areas in Second Life that are dedicated to language learning and teaching, for example the commercial area for learners of English, which is managed by Language Lab, and free areas such as the region maintained by the Goethe-Institut and the EduNation Islands. There are also examples of simulations created specifically for language education, such as those produced by the EC-funded NIFLAR and AVALON projects. NIFLAR is implemented both in Second Life and in Opensim.

Human language technologies

Human language technologies (HLT) comprise a number of areas of research and development that focus on the use of technology to facilitate communication in a multilingual information society. Human language technologies are areas of activity in departments of the European Commission that were formerly grouped under the heading language engineering (Gupta & Schulze 2011: Section 1.1).

The part of HLT that is of greatest interest to the language teacher is natural language processing (NLP), especially parsing, together with the areas of speech synthesis and speech recognition.

Speech synthesis has improved immeasurably in recent years. It is often used in electronic dictionaries to enable learners to find out how words are pronounced. At word level, speech synthesis is quite effective, the artificial voice often closely resembling a human voice. At phrase level and sentence level, however, there are often problems of intonation, resulting in speech production that sounds unnatural even though it may be intelligible. Speech synthesis as embodied in text to speech (TTS) applications is invaluable as a tool for unsighted or partially sighted people. Gupta & Schulze (2010: Section 4.1) list several examples of speech synthesis applications.

Speech recognition is less advanced than speech synthesis. It has been used in a number of CALL programs, in which it is usually described as automatic speech recognition (ASR). ASR is not easy to implement. Ehsani & Knodt (1998) summarise the core problem as follows:

"Complex cognitive processes account for the human ability to associate acoustic signals with meanings and intentions. For a computer, on the other hand, speech is essentially a series of digital values. However, despite these differences, the core problem of speech recognition is the same for both humans and machines: namely, of finding the best match between a given speech sound and its corresponding word string. Automatic speech recognition technology attempts to simulate and optimize this process computationally."

Programs embodying ASR normally provide a native speaker model that the learner is requested to imitate, but the matching process is not 100% reliable and may result in a learner's perfectly intelligible attempt to pronounce a word or phrase being rejected (Davies 2010: Section 3.4.6 and Section 3.4.7).
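The "best match" problem that Ehsani & Knodt describe can be illustrated, in a deliberately simplified form, by scoring a transcription of the learner's attempt against candidate word strings with edit distance. Real ASR systems match acoustic features rather than spellings, so this sketch is only an analogy for the matching step:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of insertions,
    deletions and substitutions needed to turn string a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_match(attempt: str, targets: list[str]) -> tuple[str, int]:
    """Pick the target phrase closest to the learner's attempt."""
    return min(((t, edit_distance(attempt, t)) for t in targets),
               key=lambda pair: pair[1])
```

Even in this toy form the pitfall noted above is visible: a perfectly intelligible attempt that happens to score poorly against the stored model will be "rejected" in favour of the nearest candidate.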

Parsing is used in a number of ways in CALL. Gupta & Schulze (2010: Section 5) describe how parsing may be used to analyse sentences, presenting the learner with a tree diagram that labels the constituent parts of speech of a sentence and shows the learner how the sentence is structured.
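What such a tree diagram presents can be sketched with a toy labelled-constituent structure. The tree below is hand-built for illustration; a real CALL parser would derive it automatically from a grammar:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A labelled constituent: a phrase or part-of-speech label with
    either child constituents or, at a leaf, the word itself."""
    label: str
    children: list = None
    word: str = None

def render(node: Node, indent: int = 0) -> str:
    """Pretty-print the tree, one constituent per line, indented by depth."""
    pad = "  " * indent
    if node.word is not None:
        return f"{pad}{node.label}: {node.word}"
    return "\n".join([f"{pad}{node.label}"] +
                     [render(c, indent + 1) for c in node.children])

# "The cat sleeps" as a hand-built constituency tree.
tree = Node("S", [
    Node("NP", [Node("Det", word="The"), Node("N", word="cat")]),
    Node("VP", [Node("V", word="sleeps")]),
])
```

Printed, the tree shows the learner that "The cat" forms a noun phrase and "sleeps" a verb phrase within the sentence, which is exactly the kind of structural labelling the diagrams described above provide.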

Parsing is also used in CALL programs to analyse the learner's input and diagnose errors. Davies (2002) writes:

"Discrete error analysis and feedback were a common feature of traditional CALL, and the more sophisticated programs would attempt to analyse the learner's response, pinpoint errors, and branch to help and remedial activities. ... Error analysis in CALL is, however, a matter of controversy. Practitioners who come into CALL via the disciplines of computational linguistics, e.g. Natural Language Processing (NLP) and Human Language Technologies (HLT), tend to be more optimistic about the potential of error analysis by computer than those who come into CALL via language teaching. [...] An alternative approach is the use of Artificial Intelligence (AI) techniques to parse the learner's response – so-called intelligent CALL (ICALL) – but there is a gulf between those who favour the use of AI to develop CALL programs (Matthews 1994) and, at the other extreme, those who perceive this approach as a threat to humanity (Last 1989:153)".

Underwood (1989) and Heift & Schulze (2007) present a more positive picture of AI.

Research into speech synthesis, speech recognition and parsing, and into how these areas of NLP can be used in CALL, is the main focus of the NLP Special Interest Group within the EUROCALL professional association and of the ICALL Special Interest Group within the CALICO professional association. The EUROCALL NLP SIG also maintains a Ning.


Impact

The question of the impact of CALL in language learning and teaching has been raised at regular intervals ever since computers first appeared in educational institutions (Davies & Hewer 2011: Section 3). Recent large-scale impact studies include the study edited by Fitzpatrick & Davies (2003) and the EACEA (2009) study, both of which were produced for the European Commission.

A distinction needs to be made between the impact and the effectiveness of CALL. Impact may be measured quantitatively and qualitatively in terms of the uptake and use of ICT in teaching foreign languages, issues of availability of hardware and software, budgetary considerations, Internet access, teachers' and learners' attitudes to the use of CALL, changes in the ways in which languages are learnt and taught, and paradigm shifts in teachers' and learners' roles. Effectiveness, on the other hand, usually focuses on assessing to what extent ICT is a more effective way of teaching foreign languages compared to using traditional methods – and this is more problematic as so many variables come into play. Worldwide, the picture of the impact of CALL is extremely varied. Most developed nations work comfortably with the new technologies, but developing nations are often beset with problems of costs and broadband connectivity. Evidence on the effectiveness of CALL – as with the impact of CALL – is extremely varied and many research questions still need to be addressed and answered. Hubbard (2002) presents the results of a CALL research survey that was sent to 120 CALL professionals from around the world asking them to articulate a CALL research question they would like to see answered. Some of the questions have been answered but many more remain open. Leakey (2011) offers an overview of current and past research in CALL and proposes a comprehensive model for evaluating the effectiveness of CALL platforms, programs and pedagogy.

A crucial issue is the extent to which the computer is perceived as taking over the teacher's role. Warschauer (1996: p. 6) perceived the computer as playing an "intelligent" role, and claimed that a computer program "should ideally be able to understand a user's spoken input and evaluate it not just for correctness but also for appropriateness. It should be able to diagnose a student's problems with pronunciation, syntax, or usage and then intelligently decide among a range of options (e.g. repeating, paraphrasing, slowing down, correcting, or directing the student to background explanations)." Jones C. (1986), on the other hand, rejected the idea of the computer being "some kind of inferior teacher-substitute" and proposed a methodology that focused more on what teachers could do with computer programs rather than what computer programs could do on their own: "in other words, treating the computer as they would any other classroom aid". Warschauer's high expectations in 1996 have still not been fulfilled, and currently there is an increasing tendency for teachers to go down the route proposed by Jones, making use of a variety of new tools such as corpora and concordancers, interactive whiteboards and applications for online communication.

Since the advent of the Web there has been an explosion in online learning, but to what extent it is effective is open to criticism. Felix (2003) takes a critical look at popular myths attached to online learning from three perspectives, namely administrators, teachers and students. She concludes: "That costs can be saved in this ambitious enterprise is clearly a myth, as are expectations of saving time or replacing staff with machines."

As for the effectiveness of CALL in promoting the four skills, Felix (2008) claims that there is "enough data in CALL to suggest positive effects on spelling, reading and writing", but more research is needed in order to determine its effectiveness in other areas, especially speaking online. She claims that students' perceptions of CALL are positive, but she qualifies this claim by stating that the technologies need to be stable and well supported, drawing attention to concerns that technical problems may interfere with the learning process. She also points out that older students may not feel comfortable with computers and younger students may not possess the necessary metaskills for coping effectively in the challenging new environments. Training in computer literacy for both students and teachers is essential, and time constraints may pose additional problems. In order to achieve meaningful results she recommends "time-series analysis in which the same group of students is involved in experimental and control treatment for a certain amount of time and then switched – more than once if possible".

Types of technology training in CALL for language teaching professionals certainly vary. Within second language teacher education programs, namely pre-service coursework, we can find "online courses along with face-to-face courses", computer technology incorporated into a more general second language education course, "technology workshops", "a series of courses offered throughout the teacher education programs, and even courses specifically designed for a CALL certificate and a CALL graduate degree". The Organisation for Economic Co-operation and Development has identified four levels of courses with only certain components, namely "web-supplemented, web-dependent, mixed mode and fully online".

There is rapidly growing interest in resources on the use of technology to deliver CALL. Journals that have devoted issues to "how teacher education programs help prepare language teachers to use technology in their own classrooms" include Language Learning and Technology (2002) and Innovations in Language Learning and Teaching (2009). The TESOL international professional association's publication of technology standards for TESOL includes a chapter on the preparation of teacher candidates in technology use, as well as on upgrading teacher educators to be able to provide such instruction. Both CALICO and EUROCALL have special interest groups for teacher education in CALL.

Professional associations

The following professional associations are dedicated to the promulgation of research, development and practice relating to the use of new technologies in language learning and teaching. Most of them organise conferences and publish journals on CALL.

  • APACALL: The Asia-Pacific Association for Computer-Assisted Language Learning. APACALL publishes the APACALL Book Series and APACALL Newsletter Series.
  • AsiaCALL: The Asia Association of Computer Assisted Language Learning, Korea. AsiaCALL publishes the AsiaCALL Online Journal.
  • Association of University Language Centres (AULC) in the UK and Ireland.
  • CALICO: Established in 1982. Currently based at Texas State University, USA. CALICO publishes the CALICO Journal.
  • EUROCALL: Founded by a group of enthusiasts in 1986 and established with the aid of European Commission funding as a formal professional association in 1993. Currently based at the University of Ulster, Northern Ireland. EUROCALL's journal, ReCALL, is published by Cambridge University Press. EUROCALL also publishes the EUROCALL Review.
  • IALLT: The US-based International Association for Language Learning Technology, originally known as IALL (International Association for Learning Labs). IALLT is a professional organisation dedicated to promoting effective uses of media centres for language teaching, learning, and research. IALLT published the IALLT Journal until 2018. In early 2019, IALLT officially merged the journal into the Foreign Language Technology Magazine (FLTMAG).
  • IATEFL: The UK-based International Association of Teachers of English as a Foreign Language. IATEFL embraces the Learning Technologies Special Interest Group (LTSIG) and publishes the CALL Review newsletter.
  • JALTCALL: Japan. The JALT CALL SIG publishes The JALT CALL Journal.
  • IndiaCALL: The India Association of Computer Assisted Language Learning. IndiaCALL is an affiliate of AsiaCALL, an associate of IATEFL, and an IALLT Regional Group.
  • LET: The Japan Association for Language Education and Technology (LET), formerly known as the Language Laboratory Association (LLA); it now embraces a wider range of language learning technologies.
  • PacCALL: The Pacific Association for Computer Assisted Language Learning, promoting CALL in the Pacific, from East to Southeast Asia, Oceania, across to the Americas. Organises the Globalization and Localization in Computer-Assisted Language Learning (GLoCALL) conference jointly with APACALL.
  • TCLT: Technology and Chinese Language Teaching, an organization for Chinese CALL studies in the United States. It has held biennial conferences and workshops since 2000, has published the double-blind, peer-reviewed online Journal of Technology and Chinese Language Teaching since 2010, and has produced the in-print supplement Series of Technology and Chinese Language Teaching in the U.S. with China Social Sciences Press since 2012.
  • WorldCALL: A worldwide umbrella association of CALL associations. The first WorldCALL conference was held at the University of Melbourne in 1998. The second WorldCALL conference took place in Banff, Canada, in 2003. The third took place in Japan in 2008, the fourth in Glasgow, Scotland, in 2013, and the fifth in Concepción, Chile, in 2018.

Surplus labour

From Wikipedia, the free encyclopedia

Surplus labour (German: Mehrarbeit) is a concept used by Karl Marx in his critique of political economy. It means labour performed in excess of the labour necessary to produce the means of livelihood of the worker ("necessary labour"). The "surplus" in this context means the additional labour a worker has to do in their job, beyond earning their keep. According to Marxian economics, surplus labour is usually uncompensated (unpaid) labour.
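The relationship between necessary and surplus labour can be shown with a small worked example. The figures below are invented purely for illustration, not taken from any empirical source: they show how, in Marx's terms, the working day divides into a paid (necessary) and an unpaid (surplus) portion, and how the ratio of the two gives the rate of surplus value (s/v).

```python
# Illustrative arithmetic only: the figures are invented to show how
# Marx's categories relate, not drawn from any empirical source.

working_day_hours = 8.0      # total labour performed per day
necessary_hours   = 5.0      # hours needed to reproduce the worker's wage

surplus_hours = working_day_hours - necessary_hours      # unpaid portion
rate_of_surplus_value = surplus_hours / necessary_hours  # Marx's s/v

print(surplus_hours)           # 3.0 hours of surplus labour per day
print(rate_of_surplus_value)   # 0.6, i.e. a rate of 60%
```

On these assumed figures, three of the eight hours worked each day are surplus labour, giving a rate of surplus value of 60%.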

Origin

Marx explains the origin of surplus labour in the following terms:

"It is only after men have raised themselves above the rank of animals, when therefore their labour has been to some extent socialised, that a state of things arises in which the surplus-labour of the one becomes a condition of existence for the other. At the dawn of civilisation the productiveness acquired by labour is small, but so too are the wants which develop with and by the means of satisfying them. Further, at that early period, the portion of society that lives on the labour of others is infinitely small compared with the mass of direct producers. Along with the progress in the productiveness of labour, that small portion of society increases both absolutely and relatively. Besides, capital with its accompanying relations springs up from an economic soil that is the product of a long process of development. The productiveness of labour that serves as its foundation and starting-point, is a gift, not of nature, but of a history embracing thousands of centuries."

The historical emergence of surplus labour is, according to Marx, also closely associated with the growth of trade (the economic exchange of goods and services) and with the emergence of a society divided into social classes. As soon as a permanent surplus product can be produced, the moral-political question arises as to how it should be distributed, and for whose benefit surplus-labour should be performed. The strong defeat the weak, and it becomes possible for a social elite to gain control over the surplus-labour and surplus product of the working population; they can live off the labour of others.

Labour which is sufficiently productive so that it can perform surplus labour is, in a cash economy, the material foundation for the appropriation of surplus-value from that labour. How exactly this appropriation will occur, is determined by the prevailing relations of production and the balance of power between social classes.

According to Marx, capital had its origin in the commercial activity of buying in order to sell and rents of various types, with the aim of gaining an income (a surplus value) from this trade. But, initially, this does not involve any capitalist mode of production; rather, the merchant traders and rentiers are intermediaries between non-capitalist producers. During a lengthy historical process, the old ways of extracting surplus labour are gradually replaced by commercial forms of exploitation.

Historical materialism

In Das Kapital Vol. 3, Marx highlights the central role played by surplus labour:

"The specific economic form, in which unpaid surplus-labour is pumped out of direct producers, determines the relationship of rulers and ruled, as it grows directly out of production itself and, in turn, reacts upon it as a determining element. Upon this, however, is founded the entire formation of the economic community which grows up out of the production relations themselves, thereby simultaneously its specific political form. It is always the direct relationship of the owners of the conditions of production to the direct producers — a relation always naturally corresponding to a definite stage in the development of the methods of labour and thereby its social productivity — which reveals the innermost secret, the hidden basis of the entire social structure and with it the political form of the relation of sovereignty and dependence, in short, the corresponding specific form of the state. This does not prevent the same economic basis — the same from the standpoint of its main conditions — due to innumerable different empirical circumstances, natural environment, racial relations, external historical influences, etc. from showing infinite variations and gradations in appearance, which can be ascertained only by analysis of the empirically given circumstances."

This statement is a foundation of Marx's historical materialism insofar as it specifies what the class conflicts in civil society are ultimately about: an economy of time, which compels some to do work of which part or all of the benefits go to someone else, while others can have leisure-time which in reality depends on the work efforts of those forced to work.

In modern society, having work or leisure may often seem a choice, but for most of humanity, work is an absolute necessity, and consequently most people are concerned with the real benefits they get from that work. They may accept a certain rate of exploitation of their labour as an inescapable condition for their existence, if they depend on a wage or salary, but beyond that, they will increasingly resist it. Consequently, a morality or legal norm develops in civil society which imposes limits for surplus-labour, in one form or another. Forced labour, slavery, gross mistreatment of workers etc. are no longer generally acceptable, although they continue to occur; working conditions and pay levels can usually be contested in courts of law.

Unequal exchange

Marx acknowledged that surplus labour may not just be appropriated directly in production by the owners of the enterprise, but also in trade. This phenomenon is nowadays called unequal exchange. Thus, he commented that:

"From the possibility that profit may be less than surplus value, hence that capital [may] exchange profitably without realizing itself in the strict sense, it follows that not only individual capitalists, but also nations may continually exchange with one another, may even continually repeat the exchange on an ever-expanding scale, without for that reason necessarily gaining in equal degrees. One of the nations may continually appropriate for itself a part of the surplus labour of the other, giving back nothing for it in the exchange, except that the measure here [is] not as in the exchange between capitalist and worker."

In this case, more work effectively exchanges for less work, and a greater value exchanges for a lesser value, because some possess a stronger market position, and others a weaker one. For the most part, Marx assumed equal exchange in Das Kapital, i.e. that supply and demand would balance; his argument was that even if, ideally speaking, no unequal exchange occurred in trade, and market equality existed, exploitation could nevertheless occur within capitalist relations of production, since the value of the product produced by labour power exceeded the value of labour power itself. Marx never completed his analysis of the world market however.

In the real world, Marxian economists like Samir Amin argue, unequal exchange occurs all the time, implying transfers of value from one place to another, through the trading process. Thus, the more trade becomes "globalised", the greater the intermediation between producers and consumers; consequently, the intermediaries appropriate a growing fraction of the final value of the products, while the direct producers obtain only a small fraction of that final value.

The most important unequal exchange in the world economy nowadays concerns the exchange between agricultural goods and industrial goods, i.e. the terms of trade favour industrial goods against agricultural goods. Often, as Raul Prebisch already noted, this has meant that more and more agricultural output must be produced and sold, to buy a given amount of industrial goods. This issue has become the subject of heated controversy at recent WTO meetings.

The practice of unequal or unfair exchange does not presuppose the capitalist mode of production, nor even the existence of money. It only presupposes that goods and services of unequal value are traded, something which has been possible throughout the whole history of human trading practices.

Criticism of Marx's concept of surplus labour

According to economist Fred Moseley, "neoclassical economic theory was developed, in part, to attack the very notion of surplus labour or surplus value and to argue that workers receive all of the value embodied in their creative efforts."

Some basic modern criticisms of Marx's theory can be found in the works by Pearson, Dalton, Boss, Hodgson and Harris (see references).

The analytical Marxist John Roemer challenges what he calls the "fundamental Marxian theorem" (after Michio Morishima) that the existence of surplus labour is the necessary and sufficient condition for profits, and proves that the theorem is logically false. However, Marx himself never argued that surplus labour was a sufficient condition for profits, only an ultimate necessary condition (Morishima aimed to prove that, starting from the existence of profit expressed in price terms, we can deduce the existence of surplus value as a logical consequence). There were five reasons:

  • profit in a capitalist operation was "ultimately" just a financial claim to products and labour services made by those who did not themselves produce those products and services, in virtue of their ownership of private property (capital assets).
  • profits could be made purely in trading processes, which themselves could be far removed in space and time from the co-operative labour which those profits ultimately presupposed.
  • surplus labour could be performed, without this leading to any profits at all, because e.g. the products of that labour failed to be sold.
  • profits could be made without any labour being involved, such as when a piece of unimproved land is sold for a profit.
  • profits could be made by a self-employed operator who did not perform surplus labour for somebody else, nor necessarily appropriated surplus labour from anywhere else.

All that Marx really argued was that surplus labour was a necessary feature of the capitalist mode of production as a general social condition. If that surplus labour did not exist, other people could not appropriate that surplus labour or its products simply through their ownership of property.

Also, the amount of unpaid, voluntary and household labour performed outside the world of business and industry, as revealed by time use surveys, suggests to some feminists (e.g. Marilyn Waring and Maria Mies) that Marxists may have overrated the importance of industrial surplus labour performed by salaried employees, because the very ability to perform that surplus labour, i.e. the continual reproduction of labour power, depends on all kinds of supports involving unremunerated work (for a theoretical discussion, see the reader by Bonnie Fox). In other words, work performed in households—often by those who do not sell their labour power to capitalist enterprises at all—contributes to the sustenance of capitalist workers who may perform little household labour.

Possibly the controversy about the concept is distorted by the enormous differences with regard to the world of work:

  • in Europe, the United States, Japan and Australasia,
  • the newly industrialising countries, and
  • the poor countries.

Countries differ greatly with respect to the way they organise and share out work, labour participation rates, and paid hours worked per year, as can easily be verified from ILO data (see also Rubery & Grimshaw's text). The general trend in the world division of labour is for hi-tech, financial and marketing services to be located in the richer countries, which hold most intellectual property rights, and for actual physical production to be located in low-wage countries. Effectively, Marxian economists argue, this means that the labour of workers in wealthy countries is valued more highly than the labour of workers in poorer countries. However, they predict that in the long run of history the operation of the law of value will tend to equalize the conditions of production and sales in different parts of the world.

Medical community of ancient Rome

Symbolic statue of Asclepius holding the Rod of Asclepius, which in later times was confused with the two-snaked caduceus

Medical community as used in this article refers to medical institutions and services offered to populations under the jurisdiction of the late Roman Republic and the Roman Empire.

Background

Medical services of the late Roman Republic and early Roman Empire were mainly imports from the civilization of Ancient Greece, at first through Greek-influenced Etruscan society and Greek colonies placed directly in Italy, and then through Greeks enslaved during the Roman conquest of Greece, Greeks invited to Rome, or Greek knowledge imparted to Roman citizens visiting or being educated in Greece. A perusal of the names of Roman physicians will show that the majority are wholly or partly Greek and that many of the physicians were of servile origin.

The servility stigma came from the accident of a more medically advanced society being conquered by a less advanced one. One of the cultural ironies of these circumstances is that free men sometimes found themselves in service to an enslaved professional or dignitary, or that the power of the state was entrusted to foreigners who had been conquered in battle and were technically slaves. In Greek society, physicians tended to be regarded as noble. Asclepius in Homer's Iliad is noble.

Importation from Greece

Public medicine

Tiber Island today, site of the first Aesculapium in Rome

A signal event in the Roman medical community was the construction of the first Aesculapium (a temple to the god of healing) in the city of Rome, on Tiber Island. In 293 BC, some officials consulted the Sibylline Books concerning measures to be taken against the plague and were advised to bring Aesculapius from Epidaurus to Rome. The sacred serpent from Epidaurus was conferred ritually on the new temple, or, in some accounts, the serpent escaped from the ship and swam to the island. Baths have been found there as well as votive offerings (donaria) in the shape of specific organs. In classical times the center covered the entire island and included a long-term recovery center. The emperor Claudius had a law passed granting freedom to slaves who had been sent to the institution for cure but were abandoned there. This law probably facilitated state disposition of the patients and recovery of the beds they occupied. The details are not available.

Ruins of the Temple of Aesculapius, Tiber Island, Rome

It was not the first time a temple had been constructed at Rome to ward off plague. The consul Gnaeus Julius Mento, one of two for the year 431 BC, dedicated a temple to Apollo Medicus ("the healer"). There was also a temple to salus ("health") on the Mons Salutaris, a spur of the Quirinal. There is no record that these earlier temples possessed the medical facilities associated with an Aesculapium; if so, the later decision to bring one in presupposes a new understanding that scientific measures could be taken against plague. The memorable description of the plague at Athens during the Peloponnesian War (430 BC) by Thucydides does not mention any measures at all to relieve those stricken with it. The dying were allowed to accumulate at the wells, which they contaminated, and the dead to pile up there. At Rome, Cicero criticized the worship of evil powers, such as Febris ("Fever"), Dea Mefitis ("Malaria"), Dea Angerona ("Sore Throat") and Dea Scabies ("Rash").

The medical art in early Rome was the responsibility of the pater familias, or patriarch. The last known public advocacy of this point of view was Marcus Cato's railing against Greek physicians and his insistence on passing on home remedies to his son. The importation of the Aesculapium established medicine in the public domain. There is no record of fees being collected for a stay at an Aesculapium, at Rome or elsewhere. The expense must have been defrayed in the same way as all temple expenses: individuals vowed to perform certain actions or contribute a certain amount if certain events happened, some of which were healings. Such a system amounts to contributions graduated by income, as the contributor could only vow what he could provide. The building of a temple and its facilities, on the other hand, was the responsibility of the magistrates. The funds came from the state treasury or from taxes.

Private medicine

A second signal act marked the start of sponsorship of private medicine by the state as well. In the year 219 BC (Lucius Aemilius Paullus and Marcus Livius Salinator were consuls), a vulnerarius, or surgeon, Archagathus, visited Rome from the Peloponnesus and was asked to stay. The state conferred citizenship on him and purchased him a taberna, or shop, near the compitium Acilii (a crossroads), which became the first officina medica.

The doctor necessarily had many assistants. Some prepared and vended medicines and tended the herb garden. There were pharmacopolae (note the female ending), unguentarii and aromatarii, all of which names are easily understood by the English reader. Others attended the doctor when required (the capsarii; they prepared and carried the doctor's capsa, or bag.). Jerome Carcopino's study of occupational names in the Corpus Inscriptionum Latinarum turned up 51 medici, 4 medicae (female doctors), an obstetrix ("midwife") and a nutrix ("nurse") in the city of Rome. These numbers, of course, are at best proportional to the true populations, which were many times greater.

At the bottom of the scale were the ubiquitous discentes ("those learning") or medical apprentices. Roman doctors of any stature combed the population for persons in any social setting who had an interest in and ability for practicing medicine. On the one hand the doctor used their services unremittingly. On the other they were treated like members of the family; i.e., they came to stay with the doctor and when they left they were themselves doctors. The best doctors were the former apprentices of the Aesculapia, who, in effect, served residencies there.

The practice of medicine

A fragment of the Hippocratic Oath on the 3rd-century Papyrus Oxyrhynchus

Medical values

The Romans valued a state of valetudo, salus or sanitas. They began their correspondence with the salutation si vales valeo, "if you are well, I am well", and ended it with salve, "be healthy". The Indo-European root is *wal-, "be strong"; health was a wholeness to some degree perpetuated by right living. The Hippocratic Oath obliges doctors to live rightly (setting an example). The first cause thought of when people fell sick was that they had not lived rightly. Vegetius' brief section on the health of a Roman legion states only that a legion can avoid disease by staying out of malarial swamps, working out regularly and living a healthy life.

Despite their best efforts, people from time to time became aeger, "sick". They languished, had nausea (words of Roman extraction) or "fell" (incidere) in morbum. They were vexed and dolorous. At that point they needed the res medica: men skilled in the ars medica who would curare morbum, "have a care for the disease", and who went by the name of medicus or medens. The root is *med-, "measure". The medicus prescribed medicina or regimina as measures against the disease.

The physician

The next step was to secure the cura of a medicus. If the patient was too sick to move, one sent for a clinicus, who went to the clinum, or couch, of the patient. Of higher status were the chirurgii (whence the English word surgeon), from Greek cheir (hand) and ourgon (work). In addition there were the eye doctor, ocularius, the ear doctor, auricularius, and the doctor of snakebites, the marsus.

That the poor paid a minimal fee for the visit of a medicus is indicated by a wisecrack in Plautus: "It was less than a nummus." Many anecdotes exist of doctors negotiating fees with wealthy patients and refusing to prescribe a remedy if agreement was not reached. Pliny says:

I will not accuse the medical art of the avarice even of its professors, the rapacious bargains made with their patients while their fate is trembling in the balance, …

The fees charged were on a sliding scale according to assets. The physicians of the rich were themselves rich. For example, Antonius Musa treated Augustus' nervous symptoms with cold baths and drugs. He was not only set free but became Augustus' physician, receiving a salary of 300,000 sesterces. There is no evidence that he was other than a private physician; that is, he was not working for the Roman government.

Legal responsibility

Doctors were generally exempt from prosecution for their mistakes. Some writers complain of legal murder. However, holding the powerful up to exorbitant fees ran the risk of retaliation. Pliny reports that the emperor Claudius fined a physician, Alcon, 180 million sesterces and exiled him to Gaul, but that on his return he made the money back in just a few years. Pliny does not say why the physician was exiled, but the blow against the man was struck on his pocketbook. He could make no such income in Gaul.

This immunity applied only to mistakes made in the treatment of free men. By chance a law existed at Rome, the Lex Aquilia, passed about 286 BC, which allowed the owners of slaves and animals to seek remedies for damage to their property, either malicious or negligent. Litigants used this law to proceed against the negligence of medici, such as the performance of an operation on a slave by an untrained surgeon resulting in death or other damage.

Social position

While encouraging and supporting the public and private practice of medicine, the Roman government tended to suppress organizations of medici in society. The constitution provided for the formation of occupational collegia, or guilds. The consuls and the emperors treated these ambivalently. Sometimes they were permitted; more often they were made illegal and were suppressed. The medici formed collegia, which had their own centers, the Scholae Medicorum, but they never amounted to a significant social force. They were regarded as subversive along with all the other collegia.

Doctors were nevertheless influential. They liked to write. Compared to the number of books written, not many have survived; for example, Tiberius Claudius Menecrates composed 150 medical works, of which only a few fragments remain. Some that survive almost in their entirety are the works of Galen, Celsus, Hippocrates and the herbal expert Pedanius Dioscorides, who wrote the five-volume De materia medica. The Natural History of Pliny the Elder became a paradigm for all subsequent works like it and gave its name to the topic, although Pliny was not himself an observer of the natural world like Aristotle or Theophrastus, whose Enquiry into Plants included a book on the medicinal uses of plants.

Military medical corps

Republican

The state of the military medical corps before Augustus is unclear. Corpsmen certainly existed at least for the administration of first aid and were enlisted soldiers rather than civilians. The commander of the legion was held responsible for removing the wounded from the field and ensuring that they got sufficient care and time to recover. He could quarter troops in private domiciles if he thought necessary. Authors who have written of Roman military activities before Augustus, such as Livy, mention that wounded troops retired to population centers to recover.

Imperial

The army of the early empire was sharply and qualitatively different. Augustus defined a permanent professional army by setting the enlistment at 16 years (with an additional 4 for reserve obligations) and establishing a special military fund, the aerarium militare, financed by a 5% inheritance tax and a 1% auction sales tax. From it came bonus payments to retiring soldiers amounting to several years' salary. It could also have been used to guarantee regular pay. Previously, legions had had to rely on booty.

If military careers were now possible, so were careers for military specialists, such as medici. Under Augustus, for the first time, occupational names of officers and functions began to appear in inscriptions. The valetudinaria, or military versions of the aesculapia (the names mean the same thing), became features of permanent camps. Caches of surgical instruments have been found in some of them. From this indirect evidence it is possible to conclude that a permanent medical corps, otherwise unattested, had been formed.

In the early empire one finds milites medici who were immunes ("exempt") from other duties. Some were staff of the hospital, which Pseudo-Hyginus mentions in De Munitionibus Castrorum as being set apart from other buildings so that the patients could rest. The hospital administrator was an optio valetudinarii. The orderlies are not generally mentioned, but they must have existed, as the patients needed care and the doctors had more important duties. Perhaps they were slaves or civilians, considered not worth mentioning; there were also some nosocomi, male nurses not in the army. Alternatively, they could have been the milites medici, a term that might denote any military medic or orderlies detailed from the legion. There were also medici castrorum. Not enough information survives in the sources to say for certain what distinctions existed, if any.

The army of Augustus featured a standardized officer corps, described by Vegetius. Among them were the ordinarii, the officers of an ordo or rank. In an acies triplex there were three such ordines, the centuries (companies) of which were commanded by centurions. The ordinarii were therefore of the rank of a centurion but did not necessarily command one if they were staff.

The term medici ordinarii in the inscriptions must refer to the lowest ranking military physicians. During his reign, Augustus finally conferred the dignitas equestris, or social rank of knight, on all physicians, public or private. They were then full citizens (in case there were any Hellenic questions) and could wear the rings of knights. In the army there was at least one other rank of physician, the medicus duplicarius, "medic at double pay", and, as the legion had milites sesquiplicarii, "soldiers at 1.5 pay", perhaps the medics had that pay grade as well.

Augustan posts were named according to a formula containing the name of the rank and the unit commanded in the genitive case; e.g., the commander of a legion (a legate, that is, an officer appointed by the emperor) was the legatus legionis, "the legate of the legion." Such posts worked much as they do today: a man on his way up the cursus honorum (roughly, "ladder of offices") would command a legion for a fixed term and then move on.

The posts of medicus legionis and medicus cohortis most likely belonged to the commanders of the medici of the legion and its cohorts. They were all under the camp commander, who might be the legatus but more often served under the legatus himself. There was, then, a medical corps associated with each camp. The cavalry alae ("wings") and the larger ships all had their medical officers, the medici alarum and the medici triremis respectively.

Practice

Capsarii depicted tending to injured soldiers on Trajan's Column

As far as can be determined, the medical corps in battle worked as follows. Trajan's Column depicts medics on the battlefield bandaging soldiers. They were located just behind the standards; i.e., near the field headquarters. This must have been a field aid station, not necessarily the first, as the soldiers or corpsmen among the soldiers would have administered first aid before carrying their wounded comrades to the station. Some soldiers were designated to ride along the line on a horse picking up the wounded. They were paid by the number of men they rescued. Bandaging was performed by capsarii, who carried bandages (fascia) in their capsae, or bags.

From the aid station the wounded went by horse-drawn ambulance to other locations, ultimately to the camp hospitals in the area. There they were seen by the medici vulnerarii, or surgeons, the main type of military doctor. They were given a bed in the hospital if they needed one and one was available. The larger hospitals could accommodate 400–500 beds. If these were insufficient, the camp commander probably utilized civilian facilities in the region or quartered the wounded in the vici, "villages".

A base hospital was quadrangular with barracks-like wards surrounding a central courtyard. On the outside of the quadrangle were private rooms for the patients. Although unacquainted with bacteria, Roman medical doctors knew about contagion and did their best to prevent it. Rooms were isolated, running water carried the waste away, and the drinking and washing water was tapped up the slope from the latrines.

Within the hospital were operating rooms, kitchens, baths, a dispensary, latrines, a mortuary and herb gardens, as doctors relied heavily on herbs for drugs. The medici could treat any wound received in battle, as long as the patient was alive. They operated or otherwise treated with scalpels, hooks, levers, drills, probes, forceps, catheters and arrow-extractors on patients anesthetized with morphine (opium poppy extract) and scopolamine (henbane extract). Instruments were boiled before use. Wounds were washed in vinegar and stitched. Broken bones were placed in traction. There is, however, evidence of wider concerns. A vaginal speculum suggests gynecology was practiced, and an anal speculum implies knowledge that the size and condition of internal organs accessible through the orifices was an indication of health. They could extract eye cataracts with a special needle. Operating room amphitheaters indicate that medical education was ongoing. Many have proposed that the knowledge and practices of the medici were not exceeded until the 20th century.

Regulation of medicine

By the late empire the state had taken more of a hand in regulating medicine. The law codes of the 4th century, such as the Codex Theodosianus, paint a picture of a medical system enforced by the laws and the state apparatus. At the top was the equivalent of a surgeon general of the empire. He was by law a noble, a dux (duke) or a vicarius (vicar) of the emperor. He held the title of comes archiatorum, "count of the chief healers." The Greek word iatros, "healer", was higher-status than the Latin medicus.

Under the comes were a number of officials called the archiatri, or more popularly the protomedici, supra medicos, domini medicorum or superpositi medicorum. They were paid by the state. It was their function to supervise all the medici in their districts; i.e., they were the chief medical examiners. Their families were exempt from taxes. They could not be prosecuted nor could troops be quartered in their homes.

The archiatri were divided into two groups:

  • archiatri sancti palatii, who were palace physicians
  • archiatri populares, who were required to provide for the poor; presumably, the more prosperous still provided for themselves.

The archiatri settled all medical disputes. Rome had 14 of them; the number in other communities varied from 5 to 10 depending on the population.
