
Monday, October 5, 2020

Intelligent tutoring system

From Wikipedia, the free encyclopedia

An intelligent tutoring system (ITS) is a computer system that aims to provide immediate and customized instruction or feedback to learners, usually without requiring intervention from a human teacher. ITSs have the common goal of enabling learning in a meaningful and effective manner by using a variety of computing technologies. There are many examples of ITSs being used in both formal education and professional settings in which they have demonstrated their capabilities and limitations. There is a close relationship between intelligent tutoring, cognitive learning theories and design; and there is ongoing research to improve the effectiveness of ITS. An ITS typically aims to replicate the demonstrated benefits of one-to-one, personalized tutoring, in contexts where students would otherwise have access to one-to-many instruction from a single teacher (e.g., classroom lectures), or no teacher at all (e.g., online homework). ITSs are often designed with the goal of providing access to high quality education to each and every student.

History

Early mechanical systems

Skinner's teaching machine

The possibility of intelligent machines has been discussed for centuries. In the 17th century, Blaise Pascal created the first calculating machine capable of performing mathematical functions, known simply as Pascal's calculator. At that time the mathematician and philosopher Gottfried Wilhelm Leibniz envisioned machines capable of reasoning and applying rules of logic to settle disputes (Buchanan, 2006). These early works contributed to the development of the computer and its future applications.

The concept of intelligent machines for instructional use dates back to 1924, when Sidney Pressey of Ohio State University created a mechanical teaching machine to instruct students without a human teacher. His machine closely resembled a typewriter, with several keys and a window that presented the learner with questions. The Pressey Machine accepted user input and provided immediate feedback by recording the score on a counter.

Pressey himself was influenced by Edward L. Thorndike, a learning theorist and educational psychologist at Teachers College, Columbia University, in the late 19th and early 20th centuries. Thorndike posited laws for maximizing learning, including the law of effect, the law of exercise, and the law of recency. By later standards, Pressey's teaching and testing machine would not be considered intelligent, as it was mechanically run and presented one question and answer at a time, but it set an early precedent for future projects. By the 1950s and 1960s, new perspectives on learning were emerging. Burrhus Frederic "B.F." Skinner at Harvard University did not agree with Thorndike's learning theory of connectionism or with Pressey's teaching machine. Skinner was a behaviourist who believed that learners should construct their answers rather than rely on recognition. He, too, constructed a teaching machine, one that used an incremental mechanical system to reward students for correct responses to questions.

Early electronic systems

In the period following the Second World War, mechanical binary systems gave way to binary-based electronic machines. These machines were considered intelligent compared to their mechanical counterparts, as they had the capacity to make logical decisions. However, the study of defining and recognizing machine intelligence was still in its infancy.

Alan Turing, a mathematician, logician and computer scientist, linked computing systems to thinking. One of his most notable papers outlined a hypothetical test to assess the intelligence of a machine, which came to be known as the Turing test. Essentially, the test has a person communicate with two other agents, a human and a computer, posing questions to both. The computer passes the test if it can respond in such a way that the questioner cannot differentiate between the other human and the computer. The Turing test has been used in its essence for more than two decades as a model for ITS development, since the main ideal for ITS systems is to communicate effectively. As early as the 1950s, programs displaying intelligent features were emerging. Turing's work, as well as later projects by researchers such as Allen Newell, Clifford Shaw, and Herbert Simon, showed programs capable of creating logical proofs and theorems. Their program, the Logic Theorist, exhibited complex symbol manipulation and even the generation of new information without direct human control, and is considered by some to be the first AI program. Such breakthroughs inspired the new field of artificial intelligence, officially named by John McCarthy in 1956 at the Dartmouth Conference, the first conference devoted to scientists and research in the field of AI.

The PLATO V CAI terminal in 1981

The latter part of the 1960s and the 1970s saw many new CAI (Computer-Assisted Instruction) projects that built on advances in computer science. The creation of the ALGOL programming language in 1958 enabled many schools and universities to begin developing CAI programs. Major computer vendors and federal agencies in the US, such as IBM, HP, and the National Science Foundation, funded the development of these projects. Early implementations in education focused on programmed instruction (PI), a structure based on a computerized input-output system. Although many supported this form of instruction, there was limited evidence of its effectiveness. The programming language LOGO was created in 1967 by Wally Feurzeig, Cynthia Solomon, and Seymour Papert as a language streamlined for education. PLATO, an educational terminal featuring displays, animations, and touch controls that could store and deliver large amounts of course material, was developed by Donald Bitzer at the University of Illinois in the early 1970s. Many other CAI projects were initiated in countries including the US, the UK, and Canada.

At the same time that CAI was gaining interest, Jaime Carbonell suggested that computers could act as a teacher rather than just a tool (Carbonell, 1970). A new perspective emerged that focused on the use of computers to intelligently coach students, called Intelligent Computer Assisted Instruction or Intelligent Tutoring Systems (ITS). Where CAI used a behaviourist perspective on learning based on Skinner's theories (Dede & Swigger, 1988), ITS drew from work in cognitive psychology, computer science, and especially artificial intelligence. There was a shift in AI research at this time as systems moved from the logic focus of the previous decade to knowledge-based systems, which could make intelligent decisions based on prior knowledge (Buchanan, 2006). One such program was Dendral, a system that predicted possible chemical structures from existing data. Further work began to showcase analogical reasoning and language processing. These changes, with their focus on knowledge, had big implications for how computers could be used in instruction. The technical requirements of ITS, however, proved to be higher and more complex than those of CAI systems, and ITS found limited success at this time.

Towards the latter part of the 1970s, interest in CAI technologies began to wane. Computers were still expensive and not as widely available as expected. Developers and instructors were reacting negatively to the high cost of developing CAI programs, the inadequate provision for instructor training, and the lack of resources.

Microcomputers and intelligent systems

The microcomputer revolution in the late 1970s and early 1980s helped to revive CAI development and jumpstart the development of ITS systems. Personal computers such as the Apple II, Commodore PET, and TRS-80 reduced the resources required to own a computer, and by 1981, 50% of US schools were using computers (Chambers & Sprecher, 1983). Several CAI projects used the Apple II to deliver CAI programs in high schools and universities, including the British Columbia Project and the California State University Project in 1981.

The early 1980s would also see Intelligent Computer-Assisted Instruction (ICAI) and ITS goals diverge from their roots in CAI. As CAI became increasingly focused on deeper interactions with content created for a specific area of interest, ITS sought to create systems that focused on knowledge of the task and the ability to generalize that knowledge in non-specific ways (Larkin & Chabay, 1992). The key goals set out for ITS were to be able to teach a task as well as perform it, adapting dynamically to the situation. In the transition from CAI to ICAI systems, the computer would have to distinguish not only between the correct and the incorrect response but also among types of incorrect response, in order to adjust the type of instruction. Research in artificial intelligence and cognitive psychology fueled the new principles of ITS. Psychologists considered how a computer could solve problems and perform 'intelligent' activities. An ITS program would have to be able to represent, store and retrieve knowledge, and even search its own database to derive new knowledge with which to respond to a learner's questions. Early specifications for ITS (or ICAI) required it to "diagnose errors and tailor remediation based on the diagnosis" (Shute & Psotka, 1994, p. 9). The idea of diagnosis and remediation is still in use today when programming ITS.

A key breakthrough in ITS research was the creation of the LISP Tutor, a program that implemented ITS principles in a practical way and showed promising effects on student performance. The LISP Tutor was developed and researched in 1983 as an ITS for teaching students the LISP programming language (Corbett & Anderson, 1992). It could identify mistakes and provide constructive feedback to students while they were performing exercises. The system was found to decrease the time required to complete the exercises while improving student test scores (Corbett & Anderson, 1992). Other ITS systems that began development around this time include TUTOR, created by Logica in 1984 as a general instructional tool, and PARNASSUS, created at Carnegie Mellon University in 1989 for language instruction.

Modern ITS

After the implementation of the first ITSs, researchers created a number of tutoring systems for different groups of students. In the late 20th century, Intelligent Tutoring Tools (ITTs) were developed by the Byzantium project, which involved six universities. The ITTs were general-purpose tutoring-system builders, and many institutions gave positive feedback while using them (Kinshuk, 1996). This builder would produce an Intelligent Tutoring Applet (ITA) for a given subject area. Different teachers created ITAs and built up a large inventory of knowledge that was accessible to others through the Internet. Once an ITA was created, teachers could copy it and modify it for future use. This system was efficient and flexible. However, Kinshuk and Patel believed that it was not designed from an educational point of view and was not developed based on the actual needs of students and teachers (Kinshuk and Patel, 1997).

 Recent work has employed ethnographic and design research methods to examine the ways ITSs are actually used by students and teachers across a range of contexts, often revealing unanticipated needs that they meet, fail to meet, or in some cases, even create.

Modern day ITSs typically try to replicate the role of a teacher or a teaching assistant, and increasingly automate pedagogical functions such as problem generation, problem selection, and feedback generation. However, given a current shift towards blended learning models, recent work on ITSs has begun focusing on ways these systems can effectively leverage the complementary strengths of human-led instruction from a teacher or peer, when used in co-located classrooms or other social contexts.

Three ITS projects functioned based on conversational dialogue: AutoTutor, Atlas (Freedman, 1999), and Why2. The idea behind these projects was that since students learn best by constructing knowledge themselves, the programs would begin with leading questions and give out answers only as a last resort. AutoTutor's students focused on answering questions about computer technology, Atlas's students focused on solving quantitative problems, and Why2's students focused on explaining physical systems qualitatively (Graesser, VanLehn, and others, 2001). Other tutoring systems, such as Andes (Gertner, Conati, and VanLehn, 1998), tend to provide hints and immediate feedback when students have trouble answering questions; with such feedback, students can guess their way to correct answers without a deep understanding of the concepts. Research with small groups of students using Atlas and Andes respectively showed that students using Atlas made significant improvements compared with students who used Andes. However, since these systems require analysis of students' dialogue, improvement is still needed before more complicated dialogues can be managed.

Structure

Intelligent tutoring systems (ITSs) consist of four basic components, based on a general consensus amongst researchers (Nwana, 1990; Freedman, 2000; Nkambou et al., 2010):

  1. The Domain model
  2. The Student model
  3. The Tutoring model, and
  4. The User interface model

The domain model (also known as the cognitive model or expert knowledge model) is built on a theory of learning, such as the ACT-R theory which tries to take into account all the possible steps required to solve a problem. More specifically, this model "contains the concepts, rules, and problem-solving strategies of the domain to be learned. It can fulfill several roles: as a source of expert knowledge, a standard for evaluating the student's performance or for detecting errors, etc." (Nkambou et al., 2010, p. 4). Another approach for developing domain models is based on Stellan Ohlsson's Theory of Learning from performance errors, known as constraint-based modelling (CBM). In this case, the domain model is presented as a set of constraints on correct solutions.
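To make the constraint-based approach concrete, the following is a minimal sketch in Python. The constraint shown is hypothetical, loosely inspired by the kind of rule a tutor like SQL-Tutor might check, and the solution representation is invented for illustration:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Constraint:
        relevance: Callable[[dict], bool]     # when does this constraint apply?
        satisfaction: Callable[[dict], bool]  # what must hold when it applies?
        feedback: str                         # message shown if violated

    # Hypothetical constraint: if the solution uses a WHERE clause, every
    # column it mentions must exist in the table schema.
    where_columns_valid = Constraint(
        relevance=lambda s: "WHERE" in s["query"].upper(),
        satisfaction=lambda s: set(s["where_columns"]) <= set(s["schema_columns"]),
        feedback="Your WHERE clause references a column that is not in the table.",
    )

    def violations(constraints, solution):
        """Collect feedback for every constraint that is relevant but unsatisfied."""
        return [c.feedback for c in constraints
                if c.relevance(solution) and not c.satisfaction(solution)]

A domain model in this style is simply a large collection of such constraints; any solution that violates none of the relevant constraints is accepted as correct.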

The student model can be thought of as an overlay on the domain model. It is considered the core component of an ITS, paying special attention to the student's cognitive and affective states and their evolution as the learning process advances. As the student works step by step through the problem-solving process, the ITS engages in a process called model tracing. Any time the student's solution deviates from the domain model, the system identifies, or flags, that an error has occurred. In constraint-based tutors, on the other hand, the student model is represented as an overlay on the constraint set. Constraint-based tutors evaluate the student's solution against the constraint set and identify satisfied and violated constraints. If any constraints are violated, the student's solution is incorrect, and the ITS provides feedback on those constraints. Constraint-based tutors provide negative feedback (i.e., feedback on errors) as well as positive feedback.
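The model-tracing step described above can also be sketched in a few lines. The domain-model interface here (a set of correct next steps and a table of known buggy steps) is a hypothetical simplification, not taken from any actual tutor:

    def model_trace(correct_steps, buggy_steps, student_step):
        """Flag a student's step by matching it against the domain model.

        correct_steps: set of steps an expert might take next
        buggy_steps:   dict mapping known incorrect steps to targeted feedback
        """
        if student_step in correct_steps:
            return ("ok", None)
        if student_step in buggy_steps:
            return ("error", buggy_steps[student_step])  # known misconception
        return ("error", "This step does not match any known solution path.")

    # Hypothetical algebra example: one step in solving 2x + 3 = 7.
    status, feedback = model_trace(
        correct_steps={"2x = 4"},
        buggy_steps={"2x = 10": "Remember to subtract 3 from both sides."},
        student_step="2x = 10",
    )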

The tutor model accepts information from the domain and student models and makes choices about tutoring strategies and actions. At any point in the problem-solving process the learner may request guidance on what to do next, relative to their current location in the model. In addition, the system recognizes when the learner has deviated from the production rules of the model and provides timely feedback, resulting in a shorter time to reach proficiency with the targeted skills. The tutor model may contain several hundred production rules, each of which can be said to exist in one of two states, learned or unlearned. Every time a student successfully applies a rule to a problem, the system updates its estimate of the probability that the student has learned the rule. The system continues to drill students on exercises requiring effective application of a rule until the probability that the rule has been learned reaches at least 95%.
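The article's description of updating a learned/unlearned probability estimate corresponds to what the cognitive-tutor literature calls Bayesian knowledge tracing (Corbett & Anderson). The sketch below uses that standard formulation; the parameter values are illustrative only, not drawn from any particular tutor:

    def bkt_update(p_learned, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
        """Update the estimate that a rule is learned after one observed attempt."""
        if correct:  # correct answers may still be lucky guesses
            evidence = p_learned * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_learned) * p_guess)
        else:        # errors may be slips by students who know the rule
            evidence = p_learned * p_slip
            posterior = evidence / (evidence + (1 - p_learned) * (1 - p_guess))
        # The practice opportunity itself may also teach the rule.
        return posterior + (1 - posterior) * p_transit

    # Drill on a rule until the mastery criterion (95%) is reached.
    p = 0.1  # prior probability that the rule is already learned
    for outcome in [True, True, False, True, True, True]:
        p = bkt_update(p, outcome)
        if p >= 0.95:
            break

Each observed attempt moves the mastery estimate up or down via Bayes' rule, and the learning-transition term models the chance that the practice itself taught the rule.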

Knowledge tracing tracks the learner's progress from problem to problem and builds a profile of strengths and weaknesses relative to the production rules. The cognitive tutoring system developed by John Anderson at Carnegie Mellon University presents information from knowledge tracing as a skillometer, a visual graph of the learner's success in each of the monitored skills related to solving algebra problems. When a learner requests a hint, or an error is flagged, the knowledge tracing data and the skillometer are updated in real-time.

The user interface component "integrates three types of information that are needed in carrying out a dialogue: knowledge about patterns of interpretation (to understand a speaker) and action (to generate utterances) within dialogues; domain knowledge needed for communicating content; and knowledge needed for communicating intent" (Padayachee, 2002, p. 3).

Nkambou et al. (2010) make mention of Nwana's (1990) review of different architectures underlining a strong link between architecture and paradigm (or philosophy). Nwana (1990) declares, "[I]t is almost a rarity to find two ITSs based on the same architecture [which] results from the experimental nature of the work in the area" (p. 258). He further explains that differing tutoring philosophies emphasize different components of the learning process (i.e., domain, student or tutor). The architectural design of an ITS reflects this emphasis, and this leads to a variety of architectures, none of which, individually, can support all tutoring strategies (Nwana, 1990, as cited in Nkambou et al., 2010). Moreover, ITS projects may vary according to the relative level of intelligence of the components. As an example, a project highlighting intelligence in the domain model may generate solutions to complex and novel problems so that students can always have new problems to work on, but it might only have simple methods for teaching those problems, while a system that concentrates on multiple or novel ways of teaching a particular topic might find a less sophisticated representation of that content sufficient.

Design and development methods

Apart from the discrepancy amongst ITS architectures each emphasizing different elements, the development of an ITS is much the same as any instructional design process. Corbett et al. (1997) summarized ITS design and development as consisting of four iterative stages: (1) needs assessment, (2) cognitive task analysis, (3) initial tutor implementation and (4) evaluation.

The first stage, needs assessment, is common to any instructional design process, especially software development. It involves a learner analysis and consultation with subject matter experts and/or the instructor(s). This first step is part of the development of the expert/knowledge and student domains. The goal is to specify learning goals and to outline a general plan for the curriculum; it is imperative not simply to computerize traditional concepts but to develop a new curriculum structure by defining the task in general, understanding learners' possible behaviours in dealing with the task and, to a lesser degree, the tutor's behaviour. In doing so, three crucial dimensions need to be addressed: (1) the probability that a student is able to solve problems; (2) the time it takes to reach this performance level; and (3) the probability that the student will actively use this knowledge in the future. Another important aspect that requires analysis is the cost-effectiveness of the interface. Moreover, the entry characteristics of teachers and students, such as prior knowledge, must be assessed, since both groups are going to be system users.

The second stage, cognitive task analysis, is a detailed approach to expert systems programming with the goal of developing a valid computational model of the required problem-solving knowledge. Chief methods for developing a domain model include: (1) interviewing domain experts, (2) conducting "think aloud" protocol studies with domain experts, (3) conducting "think aloud" studies with novices and (4) observing teaching and learning behaviour. Although the first method is most commonly used, experts are usually incapable of reporting their own cognitive processes. The "think aloud" methods, in which an expert is asked to report aloud what they are thinking while solving typical problems, can avoid this problem. Observation of actual online interactions between tutors and students provides information related to the processes used in problem-solving, which is useful for building dialogue or interactivity into tutoring systems.

The third stage, initial tutor implementation, involves setting up a problem solving environment to enable and support an authentic learning process. This stage is followed by a series of evaluation activities as the final stage which is again similar to any software development project.

The fourth stage, evaluation, includes (1) pilot studies to confirm basic usability and educational impact; (2) formative evaluations of the system under development; (3) parametric studies that examine the effectiveness of system features; and finally (4) summative evaluations of the final tutor's effect: learning rate and asymptotic achievement levels.

A variety of authoring tools have been developed to support this process and create intelligent tutors, including ASPIRE, the Cognitive Tutor Authoring Tools (CTAT), GIFT, ASSISTments Builder and AutoTutor tools. The goal of most of these authoring tools is to simplify the tutor development process, making it possible for people with less expertise than professional AI programmers to develop Intelligent Tutoring Systems.

Eight principles of ITS design and development

Anderson et al. (1987) outlined eight principles for intelligent tutor design, and Corbett et al. (1997) later elaborated on those principles, highlighting an all-embracing principle which they believed governed intelligent tutor design. They referred to this principle as:

Principle 0: An intelligent tutor system should enable the student to work to the successful conclusion of problem solving.

  1. Represent student competence as a production set.
  2. Communicate the goal structure underlying the problem solving.
  3. Provide instruction in the problem solving context.
  4. Promote an abstract understanding of the problem-solving knowledge.
  5. Minimize working memory load.
  6. Provide immediate feedback on errors.
  7. Adjust the grain size of instruction with learning.
  8. Facilitate successive approximations to the target skill.

Use in practice

All this is a substantial amount of work, even if authoring tools have become available to ease the task. This means that building an ITS is an option only in situations in which it, in spite of its relatively high development costs, still reduces the overall costs by reducing the need for human instructors or sufficiently boosting overall productivity. Such situations occur when large groups need to be tutored simultaneously or many replicated tutoring efforts are needed. Cases in point are technical training situations such as the training of military recruits and high school mathematics. One specific type of intelligent tutoring system, the Cognitive Tutor, has been incorporated into mathematics curricula in a substantial number of United States high schools, producing improved student learning outcomes on final exams and standardized tests. Intelligent tutoring systems have been constructed to help students learn geography, circuits, medical diagnosis, computer programming, mathematics, physics, genetics, chemistry, and other subjects. Intelligent Language Tutoring Systems (ILTS) teach natural language to first or second language learners; they require specialized natural language processing tools such as large dictionaries and morphological and grammatical analyzers with acceptable coverage.

Applications

During the rapid expansion of the web, new computer-aided instruction paradigms, such as e-learning and distributed learning, provided an excellent platform for ITS ideas. Areas that have used ITS include natural language processing, machine learning, planning, multi-agent systems, ontologies, the Semantic Web, and social and emotional computing. In addition, other technologies such as multimedia, object-oriented systems, modeling, simulation, and statistics have been connected to or combined with ITS. Historically non-technological areas such as the educational sciences and psychology have also been influenced by the success of ITS.

In recent years, ITS has begun to move beyond purely research-based systems to include a range of practical applications. ITSs have expanded across many critical and complex cognitive domains, and the results have been far-reaching. ITS systems have cemented a place within formal education, and these systems have found homes in the sphere of corporate training and organizational learning. ITS offers learners several affordances such as individualized learning, just-in-time feedback, and flexibility in time and space.

While intelligent tutoring systems evolved from research in cognitive psychology and artificial intelligence, there are now many applications found in education and in organizations. Intelligent tutoring systems can be found in online environments or in a traditional classroom computer lab, and are used in K-12 classrooms as well as in universities. A number of programs target mathematics, but applications can be found in the health sciences, language acquisition, and other areas of formalized learning.

Reports of improvement in student comprehension, engagement, attitude, motivation, and academic results have all contributed to ongoing interest in the investment in and research of these systems. The personalized nature of intelligent tutoring systems affords educators the opportunity to create individualized programs. Within education there is a plethora of intelligent tutoring systems; an exhaustive list does not exist, but several of the more influential programs are listed below.

Education

Algebra Tutor PAT (PUMP Algebra Tutor or Practical Algebra Tutor), developed by the Pittsburgh Advanced Cognitive Tutor Center at Carnegie Mellon University, engages students in anchored learning problems and uses modern algebraic tools in order to engage students in problem solving and in sharing their results. The aim of PAT is to tap into a student's prior knowledge and everyday experiences with mathematics in order to promote growth. The success of PAT is well documented (e.g., by the Miami-Dade County Public Schools Office of Evaluation and Research) from both a statistical (student results) and an emotional (student and instructor feedback) perspective.

SQL-Tutor is the first constraint-based tutor, developed by the Intelligent Computer Tutoring Group (ICTG) at the University of Canterbury, New Zealand. SQL-Tutor teaches students how to retrieve data from databases using the SQL SELECT statement.

EER-Tutor is a constraint-based tutor (developed by ICTG) that teaches conceptual database design using the Entity Relationship model. An earlier version of EER-Tutor was KERMIT, a stand-alone tutor for ER modelling, which was shown to result in significant improvement of students' knowledge after one hour of learning (with an effect size of 0.6).

COLLECT-UML is a constraint-based tutor that supports pairs of students working collaboratively on UML class diagrams. The tutor provides feedback on the domain level as well as on collaboration.

StoichTutor is a web-based intelligent tutor that helps high school students learn chemistry, specifically the sub-area of chemistry known as stoichiometry. It has been used to explore a variety of learning science principles and techniques, such as worked examples and politeness.

Mathematics Tutor The Mathematics Tutor (Beal, Beck & Woolf, 1998) helps students solve word problems using fractions, decimals and percentages. The tutor records success rates while a student is working on problems and provides subsequent, level-appropriate problems for the student to work on. The subsequent problems are selected based on student ability, and a desirable time in which the student should solve the problem is estimated.

eTeacher eTeacher (Schiaffino et al., 2008) is an intelligent agent, or pedagogical agent, that supports personalized e-learning assistance. It builds student profiles while observing student performance in online courses. eTeacher then uses the information from the student's performance to suggest a personalized course of action designed to assist their learning process.

ZOSMAT ZOSMAT was designed to address all the needs of a real classroom. It follows and guides a student through the different stages of their learning process. This student-centered ITS does so by recording the progress of a student's learning, and the student's program changes based on the student's effort. ZOSMAT can be used for individual learning or in a real classroom environment alongside the guidance of a human tutor.

REALP REALP was designed to help students enhance their reading comprehension by providing reader-specific lexical practice and offering personalized practice with useful, authentic reading materials gathered from the Web. The system automatically builds a user model according to the student's performance. After reading, the student is given a series of exercises based on the target vocabulary found in the reading.

CIRCSIM-Tutor CIRCSIM-Tutor is an intelligent tutoring system that is used with first-year medical students at the Illinois Institute of Technology. It uses natural dialogue in a Socratic style to help students learn about regulating blood pressure.

Why2-Atlas Why2-Atlas is an ITS that analyses students' explanations of physics principles. The students input their work in paragraph form and the program converts their words into a proof by making assumptions about student beliefs based on their explanations. In doing this, misconceptions and incomplete explanations are highlighted. The system then addresses these issues through a dialogue with the student and asks the student to correct their essay. A number of iterations may take place before the process is complete.

SmartTutor The University of Hong Kong (HKU) developed SmartTutor to support the needs of continuing-education students. Personalized learning was identified as a key need within adult education at HKU, and SmartTutor aims to fill that need. SmartTutor provides support for students by combining Internet technology, educational research and artificial intelligence.

AutoTutor AutoTutor assists college students in learning about computer hardware, operating systems and the Internet in an introductory computer literacy course by simulating the discourse patterns and pedagogical strategies of a human tutor. AutoTutor attempts to understand the learner's input from the keyboard and then formulates dialogue moves with feedback, prompts, corrections and hints.

ActiveMath ActiveMath is a web-based, adaptive learning environment for mathematics. The system strives to improve long-distance learning, to complement traditional classroom teaching, and to support individual and lifelong learning.

ESC101-ITS The Indian Institute of Technology, Kanpur, India developed the ESC101-ITS, an intelligent tutoring system for introductory programming problems.

AdaptErrEx is an adaptive intelligent tutor that uses interactive erroneous examples to help students learn decimal arithmetic.

Corporate training and industry

Generalized Intelligent Framework for Tutoring (GIFT) is educational software designed for the creation of computer-based tutoring systems. Developed by the U.S. Army Research Laboratory from 2009 to 2011, GIFT was released for commercial use in May 2012. GIFT is open-source and domain-independent, and can be downloaded online for free. The software allows an instructor to design a tutoring program that can cover various disciplines through adjustments to existing courses. It includes coursework tools intended for use by researchers, instructional designers, instructors, and students. GIFT is compatible with other teaching materials, such as PowerPoint presentations, which can be integrated into the program.

SHERLOCK "SHERLOCK" is used to train Air Force technicians to diagnose problems in the electrical systems of F-15 jets. The ITS creates faulty schematic diagrams of systems for the trainee to locate and diagnose. The ITS provides diagnostic readings allowing the trainee to decide whether the fault lies in the circuit being tested or if it lies elsewhere in the system. Feedback and guidance are provided by the system and help is available if requested.

Cardiac Tutor The Cardiac Tutor's aim is to teach advanced cardiac support techniques to medical personnel. The tutor presents cardiac problems and, through a variety of steps, students must select appropriate interventions. Cardiac Tutor provides clues, verbal advice, and feedback in order to personalize and optimize the learning. Each simulation, regardless of whether the student was able to help the patient successfully, results in a detailed report which the student then reviews.

CODES Cooperative Music Prototype Design is a Web-based environment for cooperative music prototyping. It was designed to support users, especially those who are not specialists in music, in creating musical pieces in a prototyping manner. The musical examples (prototypes) can be repeatedly tested, played and modified. One of the main aspects of CODES is interaction and cooperation between the music creators and their partners.

Effectiveness

Assessing the effectiveness of ITS programs is problematic. ITS vary greatly in design, implementation, and educational focus. When ITS are used in a classroom, the system is not only used by students, but by teachers as well. This usage can create barriers to effective evaluation for a number of reasons; most notably due to teacher intervention in student learning.

Teachers often have the ability to enter new problems into the system or adjust the curriculum. In addition, teachers and peers often interact with students while they learn with ITSs (e.g., during an individual computer lab session or during classroom lectures falling in between lab sessions) in ways that may influence their learning with the software. Prior work suggests that the vast majority of students' help-seeking behavior in classrooms using ITSs may occur entirely outside of the software, meaning that the nature and quality of peer and teacher feedback in a given class may be an important mediator of student learning in these contexts. In addition, aspects of classroom climate, such as students' overall level of comfort in publicly asking for help, or the degree to which a teacher is physically active in monitoring individual students, may add additional sources of variation across evaluation contexts. All of these variables make evaluation of an ITS complex, and may help explain variation in results across evaluation studies.

Despite the inherent complexities, numerous studies have attempted to measure the overall effectiveness of ITS, often by comparing ITS to human tutors. Reviews of early ITS systems (1995) showed an effect size of d = 1.0 in comparison to no tutoring, whereas human tutors were given an effect size of d = 2.0. Kurt VanLehn's much more recent overview (2011) of modern ITS found that there was no statistical difference in effect size between expert one-on-one human tutors and step-based ITS. Some individual ITS have been evaluated more positively than others. Studies of the Algebra Cognitive Tutor found that the ITS students outperformed students taught by a classroom teacher on standardized test problems and real-world problem-solving tasks. Subsequent studies found that these results were particularly pronounced in students from special education, non-native English, and low-income backgrounds.

A more recent meta-analysis suggests that ITSs can exceed the effectiveness of both CAI and human tutors, especially when measured by local (specific) tests as opposed to standardized tests. "Students who received intelligent tutoring outperformed students from conventional classes in 46 (or 92%) of the 50 controlled evaluations, and the improvement in performance was great enough to be considered of substantive importance in 39 (or 78%) of the 50 studies. The median ES in the 50 studies was 0.66, which is considered a moderate-to-large effect for studies in the social sciences. It is roughly equivalent to an improvement in test performance from the 50th to the 75th percentile. This is stronger than typical effects from other forms of tutoring. C.-L. C. Kulik and Kulik’s (1991) meta-analysis, for example, found an average ES of 0.31 in 165 studies of CAI tutoring. ITS gains are about twice as high. The ITS effect is also greater than typical effects from human tutoring. As we have seen, programs of human tutoring typically raise student test scores about 0.4 standard deviations over control levels. Developers of ITSs long ago set out to improve on the success of CAI tutoring and to match the success of human tutoring. Our results suggest that ITS developers have already met both of these goals.... Although effects were moderate to strong in evaluations that measured outcomes on locally developed tests, they were much smaller in evaluations that measured outcomes on standardized tests. Average ES on studies with local tests was 0.73; average ES on studies with standardized tests was 0.13. This discrepancy is not unusual for meta-analyses that include both local and standardized tests... local tests are likely to align well with the objectives of specific instructional programs. Off-the-shelf standardized tests provide a looser fit. ... Our own belief is that both local and standardized tests provide important information about instructional effectiveness, and when possible, both types of tests should be included in evaluation studies."
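The percentile interpretation in the passage above can be checked directly. Assuming normally distributed test scores, an effect size d moves the median student to the percentile given by the standard normal CDF evaluated at d; a short sketch:

    from statistics import NormalDist

    # Effect sizes quoted above: standardized tests (0.13), CAI tutoring (0.31),
    # human tutoring (0.40), overall ITS median (0.66), local tests (0.73).
    for d in (0.13, 0.31, 0.40, 0.66, 0.73):
        pct = NormalDist().cdf(d) * 100
        print(f"ES = {d:.2f} -> median student moves to ~{pct:.0f}th percentile")

For d = 0.66 the normal CDF gives about 0.75, i.e. a move from the 50th to roughly the 75th percentile, matching the figure quoted in the meta-analysis.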

Some recognized strengths of ITS are the ability to provide immediate yes/no feedback, individual task selection, on-demand hints, and support for mastery learning.

Limitations

Intelligent tutoring systems are expensive both to develop and to implement. The research phase paves the way for the development of systems that are commercially viable, but it is often expensive: it requires the cooperation and input of subject matter experts, and the cooperation and support of individuals across organizations and organizational levels. Another limitation in the development phase is the conceptualization and development of software within both budget and time constraints. There are also factors that limit the incorporation of intelligent tutors into the real world, including the long timeframe required for development and the high cost of creating the system components. A high portion of that cost is a result of building the content components. For instance, surveys revealed that encoding an hour of online instruction time took 300 hours of development time for tutoring content. Similarly, building the Cognitive Tutor took a ratio of development time to instruction time of at least 200:1 hours. The high cost of development often precludes replicating the effort for real-world applications. As a result, intelligent tutoring systems are not, in general, commercially feasible for real-world applications.

A criticism of intelligent tutoring systems currently in use is the pedagogy of immediate feedback and hint sequences that are built in to make the system "intelligent". This pedagogy is criticized for its failure to develop deep learning in students. When students are given control over when to receive hints, the resulting learning behaviour can be counterproductive. Some students immediately turn to the hints before attempting to solve the problem or complete the task. When it is possible to do so, some students "bottom out" the hints, requesting as many hints as possible as fast as possible, in order to complete the task faster. If students fail to reflect on the tutoring system's feedback or hints, and instead guess repeatedly until positive feedback is garnered, they are, in effect, learning to do the right thing for the wrong reasons. Most tutoring systems are currently unable to detect shallow learning, or to distinguish between productive and unproductive struggle (though see, e.g.). For these and many other reasons (e.g., overfitting of underlying models to particular user populations), the effectiveness of these systems may differ significantly across users.

Another criticism of intelligent tutoring systems is their failure to ask students questions to explain their actions. If the student is not learning the domain language, then it becomes more difficult to gain a deeper understanding, to work collaboratively in groups, and to transfer the domain language to writing. For example, if the student is not "talking science", then it is argued that they are not being immersed in the culture of science, making it difficult to undertake scientific writing or participate in collaborative team efforts. Intelligent tutoring systems have been criticized for being too "instructivist" and for removing intrinsic motivation, social learning contexts, and context realism from learning.

Practical concerns, in terms of the inclination of sponsors, authorities, and users to adopt intelligent tutoring systems, should also be taken into account. First, someone must have the willingness to implement the ITS. Additionally, an authority must recognize the necessity of integrating intelligent tutoring software into the current curriculum, and finally, the sponsor or authority must offer the needed support through the stages of system development until it is completed and implemented.

Evaluation of an intelligent tutoring system is an important phase; however, it is often difficult, costly, and time-consuming. Although there are various evaluation techniques presented in the literature, there are no guiding principles for the selection of appropriate evaluation method(s) to be used in a particular context. Careful inspection should be undertaken to ensure that a complex system does what it claims to do. This assessment may occur during the design and early development of the system to identify problems and to guide modifications (i.e., formative evaluation). In contrast, evaluation may occur after the completion of the system to support formal claims about the construction, behaviour of, or outcomes associated with a completed system (i.e., summative evaluation). The great challenge introduced by the lack of evaluation standards has resulted in the evaluation stage being neglected in several existing ITSs.

Improvements

Intelligent tutoring systems are less capable than human tutors in the areas of dialogue and feedback. For example, human tutors are able to interpret the affective state of the student, and potentially adapt instruction in response to these perceptions. Recent work is exploring potential strategies for overcoming these limitations of ITSs, to make them more effective.

Dialogue

Human tutors have the ability to understand a person's tone and inflection within a dialogue and interpret this to provide continual feedback through an ongoing dialogue. Intelligent tutoring systems are now being developed to attempt to simulate natural conversations. To achieve the full experience of dialogue, there are many different areas in which a computer must be programmed, including understanding tone, inflection, body language, and facial expression, and then responding to these. Dialogue in an ITS can be used to ask specific questions that help guide students and elicit information while allowing students to construct their own knowledge. The development of more sophisticated dialogue within ITS has been a focus of some current research, partially to address these limitations and to create a more constructivist approach to ITS. In addition, some current research has focused on modeling the nature and effects of various social cues commonly employed within a dialogue by human tutors and tutees, in order to build trust and rapport (which have been shown to have positive impacts on student learning).

Emotional affect

A growing body of work is considering the role of affect in learning, with the objective of developing intelligent tutoring systems that can interpret and adapt to a student's different emotional states. Humans do not use only cognitive processes in learning; the affective processes they go through also play an important role. For example, learners learn better when they experience a certain level of disequilibrium (frustration), but not so much that they feel completely overwhelmed. This has motivated the field of affective computing to begin producing and researching intelligent tutoring systems that can interpret the affective processes of an individual. An ITS can be developed to read an individual's expressions and other signs of affect in an attempt to find and tutor to the optimal affective state for learning. There are many complications in doing this, since affect is not expressed in just one way but in multiple ways, so for an ITS to be effective in interpreting affective states it may require a multimodal approach (tone, facial expression, etc.). These ideas have created a new field within ITS, that of Affective Tutoring Systems (ATS). One example of an ITS that addresses affect is Gaze Tutor, which was developed to track students' eye movements, determine whether they are bored or distracted, and then attempt to re-engage the student.

Rapport Building

To date, most ITSs have focused purely on the cognitive aspects of tutoring and not on the social relationship between the tutoring system and the student. As demonstrated by the "computers are social actors" paradigm, humans often project social heuristics onto computers. For example, in observations of young children interacting with Sam the CastleMate, a collaborative storytelling agent, children interacted with this simulated child in much the same manner as they would a human child. It has been suggested that to effectively design an ITS that builds rapport with students, the ITS should mimic strategies of instructional immediacy, behaviors which bridge the apparent social distance between students and teachers, such as smiling and addressing students by name. With regard to teenagers, Ogan et al. draw from observations of close friends tutoring each other to argue that for an ITS to build rapport as a peer to a student, a more involved process of trust building is likely necessary. This may ultimately require that the tutoring system be able to respond to, and even produce, seemingly rude behavior, in order to mediate motivational and affective student factors through playful joking and taunting.

Teachable Agents

Traditionally ITSs take on the role of autonomous tutors; however, they can also take on the role of tutees for the purpose of learning-by-teaching exercises. Evidence suggests that learning by teaching can be an effective strategy for mediating self-explanation, improving feelings of self-efficacy, and boosting educational outcomes and retention. In order to replicate this effect, the roles of the student and the ITS can be switched. This can be achieved by designing the ITS to have the appearance of being taught, as is the case in the Teachable Agent Arithmetic Game and Betty's Brain. Another approach is to have students teach a machine learning agent which can learn to solve problems by demonstration and correctness feedback, as is the case in the APLUS system built with SimStudent. In order to replicate the educational effects of learning by teaching, teachable agents generally have a social agent built on top of them which poses questions or conveys confusion. For example, Betty from Betty's Brain will prompt the student to ask her questions to make sure that she understands the material, and Stacy from APLUS will prompt the user for explanations of the feedback provided by the student.

Related conferences

Several conferences regularly consider papers on intelligent tutoring systems. The oldest is the International Conference on Intelligent Tutoring Systems, which started in 1988 and is now held every other year. The International Artificial Intelligence in Education (AIED) Society publishes The International Journal of Artificial Intelligence in Education (IJAIED) and organizes the annual International Conference on Artificial Intelligence in Education (http://iaied.org/conf/1/), started in 1989. Many papers on intelligent tutoring systems also appear at the International Conference on User Modeling, Adaptation, and Personalization and the International Conference on Educational Data Mining. The American Association for Artificial Intelligence (AAAI) sometimes has symposia and papers related to intelligent tutoring systems. A number of books have been written on ITS, including three published by Lawrence Erlbaum Associates.


Technology integration

From Wikipedia, the free encyclopedia
Technology integration is the use of technology tools in general content areas in education in order to allow students to apply computer and technology skills to learning and problem-solving. Generally speaking, the curriculum drives the use of technology and not vice versa. Technology integration is defined as the use of technology to enhance and support the educational environment. Technology integration in the classroom can also support classroom instruction by creating opportunities for students to complete assignments on the computer rather than with normal pencil and paper. In a larger sense, technology integration can also refer to the use of an integration platform and APIs in the management of a school, to integrate disparate SaaS (Software As A Service) applications, databases, and programs used by an educational institution so that their data can be shared in real-time across all systems on campus, thus supporting students' education by improving data quality and access for faculty and staff.

"Curriculum integration with the use of technology involves the infusion of technology as a tool to enhance the learning in a content area or multidisciplinary setting... Effective integration of technology is achieved when students are able to select technology tools to help them obtain information in a timely manner, analyze and synthesize the information, and present it professionally to an authentic audience. The technology should become an integral part of how the classroom functions—as accessible as all other classroom tools. The focus in each lesson or unit is the curriculum outcome, not the technology."

Integrating technology with standard curriculum can not only give students a sense of power, but also allows for more advanced learning among broad topics. However, these technologies require infrastructure, continual maintenance and repair – one determining element, among many, in how these technologies can be used for curricula purposes and whether or not they will be successful. Examples of the infrastructure required to operate and support technology integration in schools include at the basic level electricity, Internet service providers, routers, modems, and personnel to maintain the network, beyond the initial cost of the hardware and software.

Standard education curriculum with an integration of technology can provide tools for advanced learning among a broad range of topics. Integration of information and communication technology is often closely monitored and evaluated due to the current climate of accountability, outcome-based education, and standardization in assessment.

Technology integration can in some instances be problematic. A high ratio of students to technological devices has been shown to impede or slow learning and task completion. In some instances, dyadic peer interaction centered on integrated technology has proven to develop a more cooperative sense of social relations. Success or failure of technology integration is largely dependent on factors beyond the technology. The availability of appropriate software for the technology being integrated is also problematic in terms of its accessibility to students and educators. Another issue identified with technology integration is the lack of long-range planning for these tools within the educational districts where they are being used.

Technology contributes to global development and diversity in classrooms while helping to develop the fundamental building blocks needed for students to achieve more complex ideas. For technology to make an impact within the educational system, teachers and students must have access to technology in a context that is culturally relevant, responsive and meaningful to their educational practice, and that promotes quality teaching and active student learning.

History

The term 'educational technology' was used during the post-World War II era in the United States for the integration of implements such as film strips, slide projectors, language laboratories, audio tapes, and television. Presently, the computers, tablets, and mobile devices integrated into classroom settings for educational purposes are most often referred to as 'current' educational technologies. It is important to note that educational technologies continually change; the term once referred to the slate chalkboards used by students in early schoolhouses in the late nineteenth and early twentieth centuries. The phrase 'educational technology', combining 'technology' and 'education', is used to refer to the most advanced technologies available for both teaching and learning in a particular era.

In 1994, federal legislation in both the Educate America Act and the Improving America's Schools Act (IASA) authorized funds for state and federal educational technology planning. One of the principal goals listed in the Educate America Act is to promote the research, consensus building, and systemic changes needed to ensure equitable educational opportunities and high levels of educational achievement for all students (Public Law 103-227). In 1996, the Telecommunications Act supported this systemic change by helping to ensure equitable educational opportunity through bringing new technology into the education sector. The Telecommunications Act requires affordable access and service to advanced telecommunications services for public schools and libraries. Many of the computers, tablets, and mobile devices currently used in classrooms operate through Internet connectivity, particularly those that are application-based, such as tablets. Schools in high-cost areas and disadvantaged schools were to receive higher discounts on telecommunications services such as Internet, cable, satellite television, and the management component.

A chart in the "Technology Penetration in U.S. Public Schools" report states that 98% of schools reported having computers in the 1995–1996 school year, with 64% having Internet access and 38% working via networked systems. The ratio of students to computers in the United States stood at 15 students per computer in 1984; it now stands at an all-time low of 10 students per computer. From the 1980s into the 2000s, the most substantial issue to examine in educational technology was school access to technologies, according to the 1997 Policy Information Report, Computers and Classrooms: The Status of Technology in U.S. Schools. These technologies included computers, multimedia computers, the Internet, networks, cable TV, and satellite technology, among other technology-based resources.

More recently ubiquitous computing devices, such as computers and tablets, are being used as networked collaborative technologies in the classroom. Computers, tablets and mobile devices may be used in educational settings within groups, between people and for collaborative tasks. These devices provide teachers and students access to the World Wide Web in addition to a variety of software applications.

Technology education standards

The National Educational Technology Standards (NETS) have served since 1998 as a roadmap for improved teaching and learning by educators. As stated above, these standards are used by teachers, students, and administrators to measure competency and set higher goals for skill development.

The Partnership for 21st Century Skills is a national organization that advocates for 21st century readiness for every student. Their most recent Technology Plan was released in 2010, "Transforming American Education: Learning Powered by Technology". This plan outlines a vision "to leverage the learning sciences and modern technology to create engaging, relevant, and personalized learning experiences for all learners that mirror students' daily lives and the reality of their futures. In contrast to traditional classroom instruction, this requires that students be put at the center and encouraged to take control of their own learning by providing flexibility on several dimensions." Although tools have changed dramatically since the beginnings of educational technology, this vision of using technology for empowered, self-directed learning has remained consistent.

Pedagogy

The integration of electronic devices into classrooms has been cited as a possible way to bridge access and close achievement gaps for students affected by the digital divide, whether based on social class, economic inequality, or gender, in which a potential user lacks the cultural capital required to access information and communication technologies. Several motivations have been cited for integrating high-tech hardware and software into schools: (1) to make schools more efficient and productive than they currently are; (2) to transform teaching and learning into an engaging and active process connected to real life; and (3) to prepare the current generation of young people for the future workplace. Computers offer graphics and other functions students can use to express their creativity. Technology integration does not always involve a computer; it can also mean using an overhead projector, student response clickers, and so on. Enhancing how the student learns is what matters most in technology integration, and technology can help students to learn and explore more.

Paradigms

Most research in technology integration has been criticized for being atheoretical and ad hoc, driven more by the affordances of the technology than by the demands of pedagogy and subject matter. Armstrong (2012) argued that multimedia transmission tends to limit learning to simple content, because complicated content is difficult to deliver through multimedia.

One approach that attempts to address this concern is a framework aimed at describing the nature of teacher knowledge for successful technology integration. The technological pedagogical content knowledge or TPACK framework has recently received some positive attention.

Another model that has been used to analyze technology integration is the SAMR framework, developed by Ruben Puentedura. This model attempts to measure the level of technology integration with four levels that run from enhancement to transformation: Substitution, Augmentation, Modification, and Redefinition.

Constructivism

Constructivism is a crucial component of technology integration. It is a learning theory that describes the process of students constructing their own knowledge through collaboration and inquiry-based learning. According to this theory, students learn more deeply and retain information longer when they have a say in what and how they will learn. Inquiry-based learning, thus, is researching a question that is personally relevant and purposeful because of its direct correlation to the one investigating the knowledge. As stated by Jean Piaget, constructivist learning is based on four stages of cognitive development. In these stages, children must take an active role in their own learning and produce meaningful works in order to develop a clear understanding. These works are a reflection of the knowledge that has been achieved through active self-guided learning. Students are active leaders in their learning and the learning is student-led rather than teacher–directed.

Many teachers use a constructivist approach in their classrooms assuming one or more of the following roles: facilitator, collaborator, curriculum developer, team member, community builder, educational leader, or information producer.

Counter argument to computers in the classroom

Is technology in the classroom needed, or does it hinder students' social development? We've all seen a table of teenagers on their phones, all texting, not really socializing or talking to each other. How do they develop social and communication skills? Neil Postman (1993) concludes:

The role of the school is to help students learn how to ignore and discard information so that they can achieve a sense of coherence in their lives; to help students cultivate a sense of social responsibility; to help students think critically, historically, and humanely; to help students understand the ways in which technology shapes their consciousness; to help students learn that their own needs sometimes are subordinate to the needs of the group. I could go on for another three pages in this vein without any reference to how machinery can give students access to information. Instead, let me summarize in two ways what I mean. First, I'll cite a remark made repeatedly by my friend Alan Kay, who is sometimes called "the father of the personal computer." Alan likes to remind us that any problems the schools cannot solve without machines, they cannot solve with them. Second, and with this I shall come to a close: If a nuclear holocaust should occur some place in the world, it will not happen because of insufficient information; if children are starving in Somalia, it's not because of insufficient information; if crime terrorizes our cities, marriages are breaking up, mental disorders are increasing, and children are being abused, none of this happens because of a lack of information. These things happen because we lack something else. It is the "something else" that is now the business of schools.

Tools

Interactive whiteboards

Interactive whiteboards are used in many schools as replacements for standard whiteboards and provide a way for students to interact with material on the computer. In addition, some interactive whiteboard software allows teachers to record their instruction.

  • 3D virtual environments are also used with interactive whiteboards as a way for students to interact with 3D virtual learning objects employing kinetics and haptic touch in the classroom. An example of the use of this technique is the open-source project Edusim.
  • Decision Tree Consulting (DTC), a worldwide research company, has carried out research tracking the worldwide interactive whiteboard market. According to its results, interactive whiteboards continue to be the biggest technology revolution in classrooms: over 1.2 million boards are installed across the world; over 5 million classrooms were forecast to have interactive whiteboards installed by 2011; the Americas are the biggest region, closely followed by EMEA; and Mexico's Enciclomedia project to equip 145,000 classrooms, worth $1.8 billion, is the largest education technology project in the world.
  • Interactive whiteboards can accommodate different learning styles, such as visual, tactile, and audio.

Interactive whiteboards are another way that technology is expanding in schools: they assist teachers in reaching students kinesthetically and in finding different ways to present information to the entire classroom.

Student response systems

Student response systems consist of handheld remote control units, or response pads, which are operated by individual students. An infrared or radio frequency receiver attached to the teacher's computer collects the data submitted by students. The CPS (Classroom Performance System), once set, allows the teacher to pose a question to students in several formats. Students then use the response pad to send their answer to the infrared sensor. Data collected from these systems is available to the teacher in real time and can be presented to the students in a graph form on an LCD projector. The teacher can also access a variety of reports to collect and analyze student data. These systems have been used in higher education science courses since the 1970s and have become popular in K-12 classrooms beginning in the early 21st century.

Audience response systems (ARS) can help teachers analyze and act upon student feedback more efficiently. For example, with polleverywhere.com, students text in answers via mobile devices to warm-up or quiz questions. The class can quickly view collective responses to the multiple-choice questions electronically, allowing the teacher to differentiate instruction and learn where students need help most.

Combining ARS with peer learning via collaborative discussions has also been proven to be particularly effective. When students answer an in-class conceptual question individually, then discuss it with their neighbors, and then vote again on the same or a conceptually similar question, the percentage of correct student responses usually increases, even in groups where no student had given the correct answer previously.
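
As a minimal, hypothetical sketch of the pre/post-discussion comparison described above (the votes, question, and designated correct answer are invented for illustration; real ARS software differs by vendor):

    from collections import Counter

    # Invented sample data: votes on one multiple-choice conceptual question,
    # collected once before and once after peer discussion.
    before = ["A", "C", "B", "C", "C", "D", "A", "C"]
    after  = ["C", "C", "C", "C", "B", "C", "A", "C"]
    correct = "C"

    def percent_correct(votes):
        """Share of responses matching the designated correct answer."""
        return 100 * Counter(votes)[correct] / len(votes)

    print(f"Before discussion: {percent_correct(before):.0f}% correct")
    print(f"After discussion:  {percent_correct(after):.0f}% correct")

A real system would also break the tallies down per answer choice, as in the bar charts teachers project to the class.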

Among other tools that have been noted as effective for technology integration are podcasts, digital cameras, smart phones, tablets, digital media, and blogs. Other examples of technology integration include translation memories and smart computerized translation programs, which are among the newest integrations changing the field of linguistics.

Mobile learning

Mobile learning is defined as "learning across multiple contexts, through social and content interactions, using personal electronic devices". A mobile device is essentially any portable device with internet access, including tablets, smart phones, cell phones, e-book readers, and MP3 players. As mobile devices become increasingly common personal devices of K-12 students, some educators seek to utilize downloadable applications and interactive games to help facilitate learning. This practice can be controversial because many parents and educators are concerned that students will be off-task, since teachers cannot monitor their activity. This concern is currently being addressed by forms of mobile learning that require a log-in, which acts as a way to track student engagement.

Benefits

According to findings from four meta-analyses, blending technology with face-to-face teacher time generally produces better outcomes than face-to-face or online learning alone. Research is currently limited on the specific features of technology integration that improve learning. Meanwhile, the marketplace of learning technologies continues to grow and to vary widely in content, quality, implementation, and context of use.

Research shows that adding technology to K-12 environments alone does not necessarily improve learning. What matters most in implementing mobile learning is how students and teachers use technology to develop knowledge and skills, and that requires training. Successful technology integration for learning goes hand in hand with changes in teacher training, curricula, and assessment practices.

An example of teacher professional development is profiled in Edutopia's Schools That Work series on eMints, a program that offers teachers 200 hours of coaching and training in technology integration over a two-year span. In these workshops teachers are trained in practices such as using interactive whiteboards and the latest web tools to facilitate active learning. In a 2010 publication of Learning Point Associates, statistics showed that students of teachers who had participated in eMints had significantly higher standardized test scores than those attained by their peers.

Technology can also keep students focused for longer periods of time. Using computers to look up information and data is a tremendous time saver, especially when accessing a comprehensive resource like the Internet to conduct research. This time-saving aspect can keep students focused on a project much longer than they would be with books and paper resources, and it helps them develop better learning through exploration and research.

Project-based activities

Definition: project-based learning is a teaching method in which students gain knowledge and skills by working for an extended period of time to investigate and respond to an authentic, engaging, and complex question, problem, or challenge.

In project-based activities, students gain knowledge and skills by working over an extended period of time to research and respond to engaging and complex questions, problems, or challenges. Students typically work in groups to solve problems that are challenging, real, curriculum-based, and frequently related to more than one branch of knowledge. A well-designed project-based learning activity therefore addresses different student learning styles and does not assume that all students can demonstrate their knowledge in a single standard way.

Elements

Project-based learning activities involve four basic elements:

  1. An extended time frame.
  2. Collaboration.
  3. Inquiry, investigation and research.
  4. The construction of an artifact or performance of a consequential task.

Examples of activities

CyberHunt

The term "hunt" refers to finding or searching for something. A "CyberHunt" is an online activity in which learners use the internet as a tool to find answers to questions based on topics assigned by someone else; learners can also design a CyberHunt on specific topics of their own. A CyberHunt, or internet scavenger hunt, is a project-based activity that helps students gain experience in exploring and browsing the internet. A CyberHunt may ask students to interact with a site (e.g., play a game or watch a video), record short answers to teacher questions, or read and write about a topic in depth. There are basically two types of CyberHunt (a minimal sketch of the first kind follows the list below):

  • A simple task, in which the teacher develops a series of questions and gives the students a hypertext link to the URL that will give them the answer.
  • A more complex task, intended for increasing and improving student internet search skills. Teachers ask questions for students to answer using a search engine.
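
At its simplest, a CyberHunt of the first kind is just a list of question-and-link pairs. The following sketch (with invented questions and placeholder URLs) shows one way it might be represented and printed as a worksheet:

    # Hypothetical CyberHunt: (question, URL) pairs prepared by a teacher.
    # The questions and URLs below are invented placeholders.
    cyberhunt = [
        ("Who developed the SAMR model?",
         "https://example.org/samr"),
        ("What are the six building blocks of a WebQuest?",
         "https://example.org/webquests"),
    ]

    for number, (question, url) in enumerate(cyberhunt, start=1):
        print(f"{number}. {question}")
        print(f"   Find the answer at: {url}")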

WebQuests

A WebQuest is an inquiry-oriented activity in which most or all of the information used by learners is drawn from the web. It is designed to use learners' time well, to focus on using information rather than looking for it, and to support thinking at the levels of analysis, synthesis, and evaluation. It is a wonderful way of capturing students' imagination and allowing them to explore in a guided, meaningful manner, letting students investigate issues and find their own answers.

There are six building blocks of WebQuests:

  1. The introduction – capturing the student's interest.
  2. The task – describing the activity's end product.
  3. The process – the instructional activities students follow to complete the task.
  4. The resources – the websites students will use to complete the task.
  5. The evaluation – measuring the result of the activity.
  6. The conclusion – summing up the activity.

WebQuests are student-centered, web-based curricular units that are interactive and use Internet resources. The purpose of a WebQuest is to use information on the web to support the instruction taught in the classroom. A WebQuest consists of an introduction, a task (or final project that students complete at the end), processes (or instructional activities), web-based resources, evaluation of learning, reflection about learning, and a conclusion.

WISE

The Web-based Inquiry Science Environment (WISE) provides a platform for creating inquiry science projects for middle school and high school students using evidence and resources from the Web. Funded by the U.S. National Science Foundation, WISE has been developed at the University of California, Berkeley from 1996 until the present. WISE inquiry projects include diverse elements such as online discussions, data collection, drawing, argument creation, resource sharing, concept mapping and other built-in tools, as well as links to relevant web resources. WISE is a research-focused, open-source, inquiry-based learning management system that includes a student learning environment, a project authoring environment, a grading tool, and user, course, and content management tools.

Virtual field trip

A virtual field trip is a website that allows students to experience places, ideas, or objects beyond the constraints of the classroom. It is a great way to let students explore and experience new information, and the format is especially helpful in allowing schools to keep costs down. Virtual field trips may also be more practical for children in the younger grades, because there is no demand for chaperones and supervision. However, a virtual field trip does not give children the hands-on experiences and social interactions that can and do take place on an actual field trip, so an educator should incorporate hands-on material to further students' understanding of what is presented and experienced virtually. In essence, a virtual field trip is a guided exploration through the World Wide Web that organizes a collection of pre-screened, thematically based web pages into a structured online learning experience.

ePortfolio

An ePortfolio is a collection of student work, developed across varied contexts over time, that exhibits the student's achievements in one or more areas. A typical student ePortfolio might contain creative writing, paintings, photography, math explorations, music, and videos. The portfolio can advance learning by providing students and/or faculty with a way to organize, archive, and display pieces of work.


Human–computer interaction

From Wikipedia, the free encyclopedia

Human–computer interaction (HCI) studies the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways.

As a field of research, human–computer interaction is situated at the intersection of computer science, behavioural sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their seminal 1983 book, The Psychology of Human–Computer Interaction, although the authors first used the term in 1980 and the first known use was in 1975. The term connotes that, unlike other tools with only limited uses (such as a wooden mallet, useful for hitting things, but not much else), a computer has many uses and this takes place as an open-ended dialog between the user and the computer. The notion of dialog likens human–computer interaction to human-to-human interaction, an analogy which is crucial to theoretical considerations in the field.

Introduction

Humans interact with computers in many ways; the interface between humans and computers is crucial to facilitating this interaction. Desktop applications, internet browsers, handheld computers, ERP systems, and computer kiosks make use of the prevalent graphical user interfaces (GUI) of today. Voice user interfaces (VUI) are used for speech recognition and synthesis systems, and the emerging multi-modal and graphical user interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms. Growth in the field of human–computer interaction has come through improvements in the quality of interaction and through the field's different branches. Instead of designing regular interfaces, the different research branches have focused on multimodality rather than unimodality, on intelligent adaptive interfaces rather than command/action-based ones, and on active rather than passive interfaces.

The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them". An important facet of HCI is user satisfaction (or simply End User Computing Satisfaction). "Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant." Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes termed human–machine interaction (HMI), man-machine interaction (MMI) or computer-human interaction (CHI).

Poorly designed human-machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, a nuclear meltdown accident, where investigations concluded that the design of the human-machine interface was at least partly responsible for the disaster. Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instruments or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human-machine interaction, pilots had already internalized the "standard" layout, and thus the conceptually good idea actually had undesirable results.

Goals for computers

Human–computer interaction studies the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. Much of the research in the field seeks to improve human–computer interaction by improving the usability of computer interfaces. How usability is to be precisely understood, how it relates to other social and cultural values, and when it is, and is not, a desirable property of computer interfaces are increasingly debated.

Much of the research in the field of human–computer interaction takes an interest in:

  • Methods for designing new computer interfaces, thereby optimizing a design for a desired property such as learnability, findability, efficiency of use.
  • Methods for implementing interfaces, e.g., by means of software libraries.
  • Methods for evaluating and comparing interfaces with respect to their usability and other desirable properties.
  • Methods for studying human computer use and its sociocultural implications more broadly.
  • Methods for determining whether the user is a human or a computer.
  • Models and theories of human computer use as well as conceptual frameworks for the design of computer interfaces, such as cognitivist user models, Activity Theory or ethnomethodological accounts of human computer use.
  • Perspectives that critically reflect upon the values that underlie computational design, computer use and HCI research practice.

Visions of what researchers in the field seek to achieve vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.

Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.

Differences with related fields

HCI differs from human factors and ergonomics as HCI focuses more on users working specifically with computers, rather than other kinds of machines or designed artifacts. There is also a focus in HCI on how to implement the computer software and hardware mechanisms to support human–computer interaction. Thus, human factors is a broader term. HCI could be described as the human factors of computers – although some experts try to differentiate these areas.

HCI also differs from human factors in that there is less of a focus on repetitive work-oriented tasks and procedures, and much less emphasis on physical stress and the physical form or industrial design of the user interface, such as keyboards and mouse devices.

Three areas of study have substantial overlap with HCI even as the focus of inquiry shifts. Personal information management (PIM) studies how people acquire and use personal information (computer-based and other) to complete tasks. In computer-supported cooperative work (CSCW), emphasis is placed on the use of computing systems in support of collaborative work. The principles of human interaction management (HIM) extend the scope of CSCW to an organizational level and can be implemented without the use of computers.

Design

Principles

The user interacts directly with hardware for human input and output, such as displays, e.g. through a graphical user interface. The user interacts with the computer over this software interface using the given input and output (I/O) hardware. Software and hardware are matched so that the processing of the user input is fast enough, and the latency of the computer output is not disruptive to the workflow.

The following experimental design principles are considered, when evaluating a current user interface, or designing a new user interface:

  • Early focus is placed on user(s) and task(s): How many users are needed to perform the task(s) is established and who the appropriate users should be is determined (someone who has never used the interface, and will not use the interface in the future, is most likely not a valid user). In addition, the task(s) the users will be performing and how often the task(s) need to be performed is defined.
  • Empirical measurement: the interface is tested with real users who come in contact with the interface on a daily basis. The results can vary with the performance level of the user and the typical human–computer interaction may not always be represented. Quantitative usability specifics, such as the number of users performing the task(s), the time to complete the task(s), and the number of errors made during the task(s) are determined.
  • Iterative design: After determining what users, tasks, and empirical measurements to include, the following iterative design steps are performed:
    1. Design the user interface
    2. Test
    3. Analyze results
    4. Repeat

The iterative design process is repeated until a sensible, user-friendly interface is created.
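
As a rough sketch of this loop under invented assumptions (the metric names, thresholds, and canned test results below are placeholders for illustration, not a prescribed implementation):

    # Hypothetical sketch of the iterative design process: design, test with
    # users, analyze the empirical measurements, and repeat until usability
    # targets are met. All metrics, thresholds, and results are invented.
    CANNED_RESULTS = [
        {"error_rate": 0.20, "mean_task_seconds": 95},  # first prototype
        {"error_rate": 0.08, "mean_task_seconds": 70},  # after one revision
        {"error_rate": 0.04, "mean_task_seconds": 55},  # meets both targets
    ]

    def iterate_design(max_error_rate=0.05, max_task_seconds=60):
        for round_number, results in enumerate(CANNED_RESULTS, start=1):
            # Empirical measurement: in reality, test with representative users.
            if (results["error_rate"] <= max_error_rate
                    and results["mean_task_seconds"] <= max_task_seconds):
                return round_number  # sensible, user-friendly interface reached
            # Otherwise: analyze the results, revise the design, and repeat.
        return None  # targets not met within the available rounds

    print(f"Design accepted after round {iterate_design()}")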

Methodologies

A variety of methodologies for human–computer interaction design have emerged since the rise of the field in the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interact. Early methodologies treated users' cognitive processes as predictable and quantifiable, and encouraged design practitioners to look to cognitive science for areas such as memory and attention when designing user interfaces. Modern models tend to center on constant feedback and conversation between users, designers, and engineers, and push for technical systems to be built around the kinds of experiences users want to have, rather than wrapping the user experience around a finished system.

  • Activity theory: used in HCI to define and study the context in which human interactions with computers take place. Activity theory provides a framework for reasoning about actions in these contexts and informs the design of interactions from an activity-driven perspective.
  • User-centered design: user-centered design (UCD) is a modern, widely practiced design philosophy rooted in the idea that users must take center stage in the design of any computer system. Users, designers, and technical practitioners work together to articulate the needs and limitations of the user and create a system to address them. Often, user-centered designs are informed by ethnographic studies of the environments in which users will interact with the system. This practice is similar to participatory design, which emphasizes the possibility for end users to contribute actively through shared design sessions and workshops.
  • Principles of UI design: these principles may be considered during the design of a user interface: tolerance, simplicity, visibility, affordance, consistency, structure, and feedback.
  • Value sensitive design (VSD): a method for building technology that accounts for the people who use the design directly, as well as those whom the design affects, directly or indirectly. VSD uses an iterative design process that involves three kinds of investigations: conceptual, empirical, and technical. Conceptual investigations aim at understanding and articulating the various stakeholders of the design and its values, as well as any conflicts that may arise for the users of the design. Empirical investigations are qualitative or quantitative design research studies used to inform the designers' understanding of the users' values, needs, and practices. Technical investigations can involve either analysis of how people use related technologies or the design of systems.

Display designs

Displays are human-made artifacts designed to support the perception of relevant system variables and to facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g. navigating, controlling, decision making, learning, entertaining, etc.). A user or operator must be able to process whatever information that a system generates and displays; therefore, the information must be displayed according to principles in a manner that will support perception, situation awareness, and understanding.

Thirteen principles of display design

Christopher Wickens et al. defined 13 principles of display design in their book An Introduction to Human Factors Engineering.

These principles of human perception and information processing can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved through utilization of these principles.

Certain principles may not be applicable to different displays or situations. Some principles may seem to be conflicting, and there is no simple solution to say that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design.

Perceptual principles

1. Make displays legible (or audible). A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed are not discernible, the operator cannot effectively make use of them.

2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable on the basis of a single sensory variable (e.g. color, size, loudness). These sensory variables can contain many possible levels.

3. Top-down processing. Signals are likely perceived and interpreted in accordance with what is expected based on a user's experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to assure that it is understood correctly.

4. Redundancy gain. If a signal is presented more than once, it is more likely that it will be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g. color and shape, voice and print, etc.), as redundancy does not imply repetition. A traffic light is a good example of redundancy, as color and position are redundant.

5. Similarity causes confusion: Use distinguishable elements. Signals that appear to be similar will likely be confused. The ratio of similar features to different features causes signals to be similar. For example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed and dissimilar features should be highlighted.
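
As a quick arithmetic check of the example above, a minimal sketch (the character-by-character comparison is an assumption for illustration; Wickens' principle concerns features generally, not only characters):

    def similarity_ratio(a, b):
        """Shared positions divided by total positions (a rough feature ratio)."""
        shared = sum(1 for x, y in zip(a, b) if x == y)
        return shared / max(len(a), len(b))

    print(similarity_ratio("A423B9", "A423B8"))  # 5/6 ~ 0.83: easily confused
    print(similarity_ratio("92", "93"))          # 1/2 = 0.50: more distinguishable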

Mental model principles

6. Principle of pictorial realism. A display should look like the variable that it represents (e.g. high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like it would in the represented environment.

7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how it actually moves in the system. For example, the moving element on an altimeter should move upward with increasing altitude.

Principles based on attention

8. Minimizing information access cost or interaction cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by allowing for frequently accessed sources to be located at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.

9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low, which can be achieved in many ways (e.g. proximity, linkage by common colours, patterns, shapes, etc.). However, close display proximity can be harmful by causing too much clutter.

10. Principle of multiple resources. A user can more easily process information across different resources. For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.

Memory principles

11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or another display can aid the user by easing the use of their memory. However, the use of memory may sometimes benefit the user by eliminating the need to reference some type of knowledge in the world (e.g., an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in a user's head and knowledge in the world must be balanced for an effective design.

12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should attempt to eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the use of the user's mental resources. This will allow the user to focus on current conditions, and to consider possible future conditions. An example of a predictive aid is a road sign displaying the distance to a certain destination.

13. Principle of consistency. Old habits from other displays will easily transfer to support processing of new displays if they are designed consistently. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.

Human–computer interface

The human–computer interface can be described as the point of communication between the human user and the computer. The flow of information between the human and computer is defined as the loop of interaction. The loop of interaction has several aspects to it, including:

  • Visual-based: visual-based human–computer interaction is probably the most widespread area of HCI research.
  • Audio-based: audio-based interaction between a computer and a human is another important area of HCI systems, dealing with information acquired through different audio signals.
  • Task environment: The conditions and goals set upon the user.
  • Machine environment: The environment that the computer is connected to, e.g. a laptop in a college student's dorm room.
  • Areas of the interface: Non-overlapping areas involve processes of the human and computer not pertaining to their interaction. Meanwhile, the overlapping areas only concern themselves with the processes pertaining to their interaction.
  • Input flow: The flow of information that begins in the task environment, when the user has some task that requires using their computer.
  • Output: The flow of information that originates in the machine environment.
  • Feedback: Loops through the interface that evaluate, moderate, and confirm processes as they pass from the human through the interface to the computer and back.
  • Fit: This is the match between the computer design, the user and the task to optimize the human resources needed to accomplish the task.

Current research

Topics in human-computer interaction include the following:

User customization

End-user development studies have shown how ordinary users could routinely tailor applications to their own needs and to invent new applications based on their understanding of their own domains. With their deeper knowledge, users could increasingly be important sources of new applications at the expense of generic programmers with systems expertise but low domain expertise.

Embedded computation

Computation is passing beyond computers into every object for which uses can be found. Embedded systems make the environment alive with little computations and automated processes, from computerized cooking appliances to lighting and plumbing fixtures to window blinds to automobile braking systems to greeting cards. The expected difference in the future is the addition of networked communications that will allow many of these embedded computations to coordinate with each other and with the user. Human interfaces to these embedded devices will in many cases be disparate from those appropriate to workstations.

Augmented reality

Augmented reality refers to the notion of layering relevant information into our vision of the world. Existing projects show real-time statistics to users performing difficult tasks, such as manufacturing. Future work might include augmenting our social interactions by providing additional information about those we converse with.

Social computing

In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis. Much of this research draws from psychology, social psychology, and sociology. For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name. Other research finds that individuals perceive their interactions with computers more positively than their interactions with humans, despite behaving the same way toward these machines.

Knowledge-driven human–computer interaction

In human–computer interaction, a semantic gap usually exists between the human's and the computer's understandings of each other's behavior. Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties.
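
As a loose, toy illustration of this idea (the vocabulary, mappings, and action names are invented; real ontologies use richer formalisms such as OWL):

    from typing import Optional

    # Toy "ontology": domain terms and their synonyms mapped to canonical
    # actions, used to resolve ambiguity in user input. All names invented.
    ONTOLOGY = {
        "delete": {"synonyms": {"remove", "erase", "discard"}, "action": "file.delete"},
        "copy":   {"synonyms": {"duplicate", "clone"},         "action": "file.copy"},
    }

    def resolve(term: str) -> Optional[str]:
        """Map a user's word onto a canonical action, narrowing the semantic gap."""
        term = term.lower()
        for concept, entry in ONTOLOGY.items():
            if term == concept or term in entry["synonyms"]:
                return entry["action"]
        return None  # unresolved: the system should ask the user to clarify

    print(resolve("erase"))  # -> "file.delete"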

Emotions and human-computer interaction

In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions in order to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. The potential of detecting human emotions in an automated and digital fashion lies in improvements to the effectiveness of human-computer interaction. The influence of emotions in human-computer interaction has been studied in fields such as financial decision making, using ECG, and organisational knowledge sharing, using eye tracking and face readers as affect-detection channels. In these fields it has been shown that affect-detection channels have the potential to detect human emotions and that information systems can incorporate the data obtained from affect-detection channels to improve decision models.

Brain–computer interfaces

A brain–computer interface (BCI) is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.

Factors of change

Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated by D. Engelbart: "If ease of use was the only valid criterion, people would stick to tricycles and never try bicycles."

The means by which humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by developments in computing. These forces include:

  • Decreasing hardware costs leading to larger memory and faster systems
  • Miniaturization of hardware leading to portability
  • Reduction in power requirements leading to portability
  • New display technologies leading to the packaging of computational devices in new forms
  • Specialized hardware leading to new functions
  • Increased development of network communication and distributed computing
  • Increasingly widespread use of computers, especially by people who are outside of the computing profession
  • Increasing innovation in input techniques (e.g., voice, gesture, pen), combined with lowering cost, leading to rapid computerization by people formerly left out of the computer revolution.
  • Wider social concerns leading to improved access to computers by currently disadvantaged groups

As of 2010 the future for HCI is expected to include the following characteristics:

  • Ubiquitous computing and communication. Computers are expected to communicate through high speed local networks, nationally over wide-area networks, and portably via infrared, ultrasonic, cellular, and other technologies. Data and computational services will be portably accessible from many if not most locations to which a user travels.
  • High-functionality systems. Systems can have large numbers of functions associated with them. There are so many systems that most users, technical or non-technical, do not have time to learn about in the traditional way (e.g., through thick user manuals).
  • Mass availability of computer graphics. Computer graphics capabilities such as image processing, graphics transformations, rendering, and interactive animation are becoming widespread as inexpensive chips become available for inclusion in general workstations and mobile devices.
  • Mixed media. Commercial systems can handle images, voice, sounds, video, text, formatted data. These are exchangeable over communication links among users. The separate fields of consumer electronics (e.g., stereo sets, DVD players, televisions) and computers are beginning to merge. Computer and print fields are expected to cross-assimilate.
  • High-bandwidth interaction. The rate at which humans and machines interact is expected to increase substantially due to the changes in speed, computer graphics, new media, and new input/output devices. This can lead to some qualitatively different interfaces, such as virtual reality or computational video.
  • Large and thin displays. New display technologies are maturing, enabling very large displays and displays that are thin, lightweight, and low in power use. This is having large effects on portability and will likely enable developing paper-like, pen-based computer interaction systems very different in feel from present desktop workstations.
  • Information utilities. Public information utilities (such as home banking and shopping) and specialized industry services (e.g., weather for pilots) are expected to proliferate. The rate of proliferation can accelerate with the introduction of high-bandwidth interaction and the improvement in quality of interfaces.

Scientific conferences

One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery's (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name CHI (pronounced kai, or khai). CHI is organized by ACM Special Interest Group on Computer–Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendants, and is quite broad in scope. It is attended by academics, practitioners and industry people, with company sponsors such as Google, Microsoft, and PayPal.

There are also dozens of other smaller, regional or specialized HCI-related conferences held around the world each year.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...