
Thursday, September 3, 2020

Diana Rigg

From Wikipedia, the free encyclopedia
 

Diana Rigg

Rigg in Diana in 1973

Born: Enid Diana Elizabeth Rigg, July 20, 1938
Occupation: Actress
Years active: 1957–present
Spouse(s): Menachem Gueffen (m. 1973; div. 1976); Archibald Stirling (m. 1982; div. 1990; 1 child)
Children: Rachael Stirling

Dame Enid Diana Elizabeth Rigg DBE (born July 20, 1938) is an English actress. She played Emma Peel in the TV series The Avengers (1965–68), Countess Teresa di Vicenzo, wife of James Bond, in On Her Majesty's Secret Service (1969), and Olenna Tyrell in Game of Thrones (2013–17). She has also had a career in theatre, including playing the title role in Medea, both in London and New York, for which she won the 1994 Tony Award for Best Actress in a Play. She was made a CBE in 1988 and a Dame in 1994 for services to drama.

Rigg made her professional stage debut in 1957 in The Caucasian Chalk Circle, and joined the Royal Shakespeare Company in 1959. She made her Broadway debut in the 1971 production of Abelard & Heloise. Her film roles include Helena in A Midsummer Night's Dream (1968); Lady Holiday in The Great Muppet Caper (1981); and Arlena Marshall in Evil Under the Sun (1982). She won the BAFTA TV Award for Best Actress for the BBC miniseries Mother Love (1989), and an Emmy Award for her role as Mrs. Danvers in an adaptation of Rebecca (1997). Her other television credits include You, Me and the Apocalypse (2015), Detectorists (2015), and the Doctor Who episode "The Crimson Horror" (2013) with her daughter, Rachael Stirling.

Early life and education

Rigg was born in 1938 in Doncaster, then in the West Riding of Yorkshire (now South Yorkshire), to Louis Rigg (1903–1968) and Beryl Hilda Rigg (née Helliwell; 1908–1981); her father was a Yorkshire-born railway engineer. Between the ages of two months and eight years, Rigg lived in Bikaner, India, where her father was employed as a railway executive.[2] During those years she spoke Hindi as her second language.




She was later sent back to England to attend a boarding school, Fulneck Girls School, in a Moravian settlement near Pudsey. Rigg hated her boarding school, where she felt like a fish out of water, but she believes that Yorkshire played a greater part in shaping her character than India did. She trained as an actress at the Royal Academy of Dramatic Art from 1955 to 1957, where her classmates included Glenda Jackson and Siân Phillips.

Theatre career

Rigg's career in film, television and the theatre has been wide-ranging, including roles with the Royal Shakespeare Company between 1959 and 1964. Her professional debut was as Natella Abashwili in the RADA production of The Caucasian Chalk Circle at the York Festival in 1957.

She returned to the stage in the Ronald Millar play Abelard and Heloise in London in 1970, and made her Broadway debut with the play in 1971, earning the first of three Tony Award nominations for Best Actress in a Play. She received her second nomination in 1975, for The Misanthrope. A member of the National Theatre Company at the Old Vic from 1972 to 1975, Rigg took leading roles in the premiere productions of two Tom Stoppard plays: as Dorothy Moore in Jumpers (National Theatre, 1972) and Ruth Carson in Night and Day (Phoenix Theatre, 1978).

In 1982, she appeared in a musical called Colette, based on the life of the French writer and created by Tom Jones and Harvey Schmidt, but it closed during an American tour en route to Broadway. In 1987 she took a leading role in the West End production of Stephen Sondheim's musical Follies. In the 1990s, she had triumphs with roles at the Almeida Theatre in Islington, including Medea in 1992 (which transferred to the Wyndham's Theatre in 1993 and then Broadway in 1994, for which she received the Tony Award for Best Actress), Mother Courage at the National Theatre in 1995 and Who's Afraid of Virginia Woolf? at the Almeida Theatre in 1996 (which transferred to the Aldwych Theatre in 1997).

In 2004, she appeared as Violet Venable in Sheffield Theatres' production of Tennessee Williams's play Suddenly Last Summer, which transferred to the Albery Theatre. In 2006, she appeared at the Wyndham's Theatre in London's West End in a drama entitled Honour which had a limited but successful run. In 2007, she appeared as Huma Rojo in the Old Vic's production of All About My Mother, adapted by Samuel Adamson and based on the film of the same title directed by Pedro Almodóvar.

She appeared in 2008 in The Cherry Orchard at the Chichester Festival Theatre, returning there in 2009 to star in Noël Coward's Hay Fever. In 2011 she played Mrs. Higgins in Pygmalion at the Garrick Theatre, opposite Rupert Everett and Kara Tointon, having played Eliza Doolittle 37 years earlier at the Albery Theatre.

In February 2018, she returned to Broadway in the non-singing role of Mrs. Higgins in My Fair Lady. She commented on taking the role, "I think it's so special. When I was offered Mrs. Higgins, I thought it was just such a lovely idea." She received her fourth Tony nomination for the role.

Film and television career

Rigg appeared in the British 1960s television series The Avengers (1961–69) opposite Patrick Macnee's John Steed, playing the secret agent Emma Peel in 51 episodes. She replaced Elizabeth Shepherd at very short notice after Shepherd was dropped from the role having filmed two episodes; Rigg had auditioned for the part on a whim, without ever having seen the programme. Although she was hugely successful in the series, she disliked the lack of privacy that it brought and was uncomfortable with her position as a sex symbol. In an interview with The Guardian in 2019, Rigg stated that "becoming a sex symbol overnight had shocked" her.

She also disliked the way she was treated by the production company, Associated British Corporation (ABC). For her second series she held out for a pay rise from £150 a week to £450; she said in 2019, when gender pay inequality was very much in the news, that "not one woman in the industry supported me ... Neither did Patrick [Macnee, her co-star]... But I was painted as this mercenary creature by the press when all I wanted was equality. It's so depressing that we are still talking about the gender pay gap." She did not stay for a third year. Macnee noted that Rigg later told him she considered him and her driver to be her only friends on the set.

On the big screen she became a Bond girl in On Her Majesty's Secret Service (1969), playing Tracy Bond, James Bond's only wife, opposite George Lazenby; she said she took the role in the hope of becoming better known in the United States. In 1973–74, she starred in a short-lived US sitcom called Diana.

Her other films from this period include The Assassination Bureau (1969), Julius Caesar (1970), The Hospital (1971), Theatre of Blood (1973), In This House of Brede (1975), based on the book by Rumer Godden, and A Little Night Music (1977). She appeared as the title character in The Marquise (1980), a television adaptation of the play by Noël Coward; in the title role of the Yorkshire Television production of Ibsen's Hedda Gabler (1981); and as Lady Holiday in the film The Great Muppet Caper (also 1981). The following year she received acclaim for her performance as Arlena Marshall in the film adaptation of Agatha Christie's Evil Under the Sun, sharing barbs with her character's old rival, played by Maggie Smith.

She appeared as Regan, the king's treacherous second daughter, in a Granada Television production of King Lear (1983), which starred Laurence Olivier in the title role. As Lady Dedlock she co-starred with Denholm Elliott in a television version of Dickens's Bleak House (BBC, 1985), and she played the Evil Queen, Snow White's evil stepmother, in the Cannon Movie Tales film adaptation of Snow White (1987). In 1989 she played Helena Vesey in Mother Love for the BBC; her portrayal of an obsessive mother prepared to do anything, even murder, to keep control of her son won Rigg the 1990 BAFTA for Best Television Actress.

In 1995, she appeared as Evgenia, the main character's grandmother, in a television film adaptation of Danielle Steel's Zoya.

She appeared on television as Mrs Danvers in Rebecca (1997), winning an Emmy; in the PBS production Moll Flanders; and as the amateur detective Mrs Bradley in The Mrs Bradley Mysteries. In this BBC series, first aired in 1998, she played Gladys Mitchell's detective, Dame Beatrice Adela Le Strange Bradley, an eccentric old woman who worked for Scotland Yard as a pathologist. The series was not a critical success and did not return for a further run.

From 1989 until 2003, she hosted the television series Mystery!, shown in the United States by PBS broadcaster WGBH, taking over from Vincent Price, her co-star in Theatre of Blood.

She also appeared in the second series of Ricky Gervais's comedy Extras, alongside Harry Potter star Daniel Radcliffe, and in the 2006 film The Painted Veil.

In 2013 she appeared in an episode of Doctor Who, a Victorian-era story called "The Crimson Horror", alongside her daughter Rachael Stirling, Matt Smith and Jenna-Louise Coleman. The episode was specially written for her and her daughter by Mark Gatiss and aired as part of series 7. It was not the first time mother and daughter had appeared in the same production (that was the 2000 NBC film In the Beginning), but it was the first time they had acted together, and the first time in her career Rigg drew on her roots to use a Doncaster, Yorkshire, accent.

The same year, Rigg secured a recurring role in the third season of the HBO series Game of Thrones, portraying Lady Olenna Tyrell, a witty and sarcastic political mastermind popularly known as the Queen of Thorns, the paternal grandmother of regular character Margaery Tyrell. Her performance was well received by critics and audiences alike, and earned her an Emmy nomination for Outstanding Guest Actress in a Drama Series for the 65th Primetime Emmy Awards in 2013. She reprised her role in season four of Game of Thrones, and in July 2014 received another Guest Actress Emmy nomination. In 2015 and 2016, she again reprised the role in seasons five and six in an expanded role from the books. The character was finally killed off in the seventh season, with Rigg's final performance receiving critical acclaim. In April 2019 Rigg said she had never watched Game of Thrones, before or after her time on the show.

Personal life

In the 1960s, Rigg lived for eight years with director Philip Saville, gaining attention in the tabloids when she disclaimed interest in marrying the older, already-married Saville, saying she had no desire "to be respectable." She was married to Menachem Gueffen, an Israeli painter, from 1973 until their divorce in 1976, and to Archibald Stirling, a theatrical producer and former officer in the Scots Guards, from 25 March 1982, until their divorce in 1990 after his affair with the actress Joely Richardson. With Stirling, Rigg has a daughter, actress Rachael Stirling, who was born in 1977.

Rigg has long been an outspoken critic of feminism, saying in 1969, "Women are in a much stronger position than men."

Rigg is a Patron of International Care & Relief and was for many years the public face of the charity's child sponsorship scheme. She was also Chancellor of the University of Stirling, a ceremonial rather than executive role, and was succeeded by James Naughtie when her ten-year term of office ended on 31 July 2008.

Michael Parkinson, who first interviewed Rigg in 1972, described her as the most desirable woman he ever met, one who "radiated a lustrous beauty". A smoker from the age of 18, Rigg was still smoking 20 cigarettes a day in 2009. By December 2017 she had stopped smoking, after serious illness led to heart surgery, a cardiac ablation, two months earlier. A devout Christian, she commented: "My heart had stopped ticking during the procedure, so I was up there and the good Lord must have said, 'Send the old bag down again, I'm not having her yet!'"

In a June 2015 interview with Stephen Bowie of The A.V. Club, Rigg commented on the chemistry between Patrick Macnee and herself on The Avengers, despite the 16-year age gap between them: "I sort of vaguely knew Patrick Macnee, and he looked kindly on me and sort of husbanded me through the first couple of episodes. After that we became equal, and loved each other and sparked off each other. And we'd then improvise, write our own lines. They trusted us. Particularly our scenes when we were finding a dead body—I mean, another dead body. How do you get 'round that one? They allowed us to do it." Of the improvisation of the dialogue, she said: "Not for an instant, no. Well, when I say improvising, Pat and I would sit down and work out approximately what we'd say. It wasn't sort of...who's the American duo? Mike Nichols and Elaine May. It was definitely not that." Asked if she had ever stayed in touch with Macnee (the interview was published two days before Macnee's death, and decades after they were reunited for one last time on her short-lived American series Diana), she replied: "You'll always be close to somebody that you worked with very intimately for so long, and you become really fond of each other. But we haven't seen each other for a very, very long time."

Her first grandchild, a boy named Jack (born to Rachael Stirling and Elbow frontman Guy Garvey), was born in April 2017.

Filmography

Film

Year Title Role Notes
1968 A Midsummer Night's Dream Helena
1969 Mini-Killers Short film
The Assassination Bureau Sonya Winter
On Her Majesty's Secret Service Teresa "Tracy" di Vicenzo
1970 Julius Caesar Portia
1971 The Hospital Barbara Drummond
1973 Theatre of Blood Edwina Lionheart
1977 A Little Night Music Countess Charlotte Mittelheim
1981 The Great Muppet Caper Lady Holiday
1982 Evil Under the Sun Arlena Marshall
1986 The Worst Witch Miss Hardbroom
1987 Snow White The Evil Queen
1994 A Good Man in Africa Chloe Fanshawe
1999 Parting Shots Lisa
2005 Heidi Grandmamma
2006 The Painted Veil Mother Superior
2015 The Honourable Rebel Narrator
2017 Breathe Lady Neville
2021 Last Night in Soho Miss Collins Post-production

Television

Year Title Role Notes
1959 A Midsummer Night's Dream Bit part TV film
1963 The Sentimental Agent Francy Wilde Episode: "A Very Desirable Plot"
1964 Festival Adriana Episode: "The Comedy of Errors"
Armchair Theatre Anita Fender Episode: "The Hothouse"
1965 ITV Play of the Week Bianca Episode: "Women Beware Women"
1965–68 The Avengers Emma Peel Main role (51 episodes)
1970 ITV Saturday Night Theatre Liz Jardine Episode: "Married Alive"
1973–74 Diana Diana Smythe Main role (15 episodes)
1974 Affairs of the Heart Grace Gracedew Episode: "Grace"
1975 In This House of Brede Philippa TV film
The Morecambe & Wise Show Nell Gwynne Sketch in Christmas Show
1977 Three Piece Suite Various Regular role (6 episodes)
1979 Oresteia Clytemnestra TV miniseries
1980 The Marquise Eloise TV film
1981 Hedda Gabler Hedda Gabler TV film
1982 Play of the Month Rita Allmers Episode: Little Eyolf
Witness for the Prosecution Christine Vole TV film
1983 King Lear Regan TV film
1985 Bleak House Lady Honoria Dedlock TV miniseries
1986 The Worst Witch Miss Constance Hardbroom TV film
1987 A Hazard of Hearts Lady Harriet Vulcan TV film
1989 The Play on One Lydia Episode: "Unexplained Laughter"
Mother Love Helena Vesey TV miniseries
British Academy Television Award for Best Actress
Broadcast Press Guild Award for Best Actress
1992 Mrs. 'Arris Goes to Paris Mme. Colbert TV film
1993 Road to Avonlea Lady Blackwell Episode: "The Disappearance"
Running Delilah Judith TV film
Screen Two Baroness Frieda von Stangel Episode: "Genghis Cohn"
Nominated – CableACE Award for Best Supporting Actress in a Miniseries or Movie
1995 Zoya Evgenia TV film
The Haunting of Helen Walker Mrs. Grose TV film
1996 The Fortunes and Misfortunes of Moll Flanders Mrs. Golightly TV film
Samson and Delilah Mara TV film
1997 Rebecca Mrs. Danvers TV miniseries
Primetime Emmy Award for Outstanding Supporting Actress in a Miniseries or a Movie
1998 The American Madame de Bellegarde TV film
1998–2000 The Mrs Bradley Mysteries Mrs. Adela Bradley Main role (5 episodes)
2000 In the Beginning Mature Rebeccah TV film
2001 Victoria & Albert Baroness Lehzen TV miniseries
Nominated – Primetime Emmy Award for Outstanding Supporting Actress in a Miniseries or a Movie
2003 Murder in Mind Jill Craig Episode: "Suicide"
Charles II: The Power and the Passion Queen Henrietta Maria TV miniseries
2006 Extras Herself Episode: "Daniel Radcliffe"
2013–17 Game of Thrones Olenna Tyrell 18 episodes
Nominated – Primetime Emmy Award for Outstanding Guest Actress in a Drama Series (2013, 2014, 2015, 2018)
Nominated – Critics' Choice Television Award for Best Guest Performer in a Drama Series (2013, 2014)
2013 Doctor Who Mrs. Winifred Gillyflower Episode: "The Crimson Horror"
2015, 2017 Penn Zero: Part-Time Hero Mayor Pink Panda (voice) 3 episodes
Detectorists Veronica 6 episodes
2015 You, Me and the Apocalypse Sutton 5 episodes
Professor Branestawm Returns Lady Pagwell TV film
2017 Victoria Duchess of Buccleuch 9 episodes
2019 The Snail and the Whale Narrator Short television film
2020 All Creatures Great and Small Mrs. Pumphrey Upcoming TV series
Black Narcissus Mother Dorothea Upcoming miniseries

Theatre credits

List of selected theatre credits

Year Title Role Notes
1957 The Caucasian Chalk Circle Natella Abashwili Theatre Royal, York Festival
1964 King Lear Cordelia Royal Shakespeare Company (European/US Tour)
1966 Twelfth Night Viola Royal Shakespeare Company
1970 Abelard and Heloise Heloise Wyndham's Theatre, London
1971 Abelard and Heloise Heloise Brooks Atkinson Theatre, New York
1972 Macbeth Lady Macbeth Old Vic Theatre, London
1972 Jumpers Dorothy Moore Old Vic Theatre, London
1973 The Misanthrope Célimène Old Vic Theatre, London
1974 Pygmalion Eliza Doolittle Albery Theatre, London
1975 The Misanthrope Célimène St. James Theatre, New York
1978 Night and Day Ruth Carson Phoenix Theatre, London
1982 Colette Colette US national tour
1983 Heartbreak House Lady Ariadne Utterword Theatre Royal Haymarket, London
1985 Little Eyolf Rita Allmers Lyric Theatre, Hammersmith, London
1985 Antony and Cleopatra Cleopatra Chichester Festival Theatre, UK
1986 Wildfire Bess Theatre Royal, Bath & Phoenix Theatre, London
1987 Follies Phyllis Rogers Stone Shaftesbury Theatre, London
1990 Love Letters Melissa Stage Door Theatre, San Francisco
1992 Putting It Together Old Fire Station Theatre, Oxford
1992 Berlin Bertie Rosa Royal Court Theatre, London
1992 Medea Medea Almeida Theatre, London
1993 Medea Medea Wyndham's Theatre, London
1994 Medea Medea Longacre Theatre, New York
1995 Mother Courage and Her Children Mother Courage National Theatre, London
1996 Who's Afraid of Virginia Woolf Martha Almeida Theatre, London
1997 Who's Afraid of Virginia Woolf Martha Aldwych Theatre, London
1998 Phaedra Phaedra Almeida at the Albery Theatre, London & BAM in Brooklyn
1998 Britannicus Agrippina Almeida at the Albery Theatre, London & BAM in Brooklyn
2001 Humble Boy Flora Humble National Theatre, London
2002 The Hollow Crown International Tour: New Zealand, Australia, Stratford-upon-Avon, UK
2004 Suddenly, Last Summer Violet Venable Albery Theatre, London
2006 Honour Honour Wyndham's Theatre, London
2007 All About My Mother Huma Rojo Old Vic Theatre, London
2008 The Cherry Orchard Ranyevskaya Chichester Festival Theatre, UK
2009 Hay Fever Judith Bliss Chichester Festival Theatre, UK
2011 Pygmalion Mrs. Higgins Garrick Theatre, London
2018 My Fair Lady Mrs. Higgins Vivian Beaumont Theatre, New York

Honours, awards and nominations

Rigg received honorary degrees from the University of Stirling in 1988, the University of Leeds in 1992, and London South Bank University in 1996.





In 2014, Rigg received the Will Award, presented by the Shakespeare Theatre Company, along with Stacy Keach and John Hurt.


On 25 October 2015, to mark 50 years of Emma Peel, the BFI (British Film Institute) screened an episode of The Avengers followed by an onstage interview with Rigg about her time in the television series.

Year Award Category Work Result
1967 Emmy Award Best Actress in a Drama Series The Avengers Nominated
1968 Nominated
1970 Laurel Award Female New Face The Assassination Bureau Nominated
1971 Tony Award Best Actress in a Play Abelard and Heloise Nominated
1972 Golden Globe Best Supporting Actress (motion picture) The Hospital Nominated
1975 Tony Award Best Actress in a Play The Misanthrope Nominated
Drama Desk Award Outstanding Actress in a Play Nominated
Emmy Award Best Actress in a TV Movie In This House of Brede Nominated
1990 BAFTA TV Award Best Actress Mother Love Won
Broadcasting Press Guild Award Best Actress Won
1992 Evening Standard Award Best Actress Medea Won
1994 Olivier Award Best Actress Nominated
Drama Desk Award Outstanding Actress in a Play Nominated
Tony Award Best Actress in a Play Won
1996 CableACE Award Supporting Actress in a Movie or Miniseries Screen Two (1985) – episode "Genghis Cohn" Nominated
Olivier Award Best Actress Mother Courage Nominated
Evening Standard Award Best Actress Mother Courage and Who's Afraid of Virginia Woolf Won
1997 Olivier Award Best Actress Who's Afraid of Virginia Woolf Nominated
Emmy Award Best Supporting Actress in a Miniseries or TV Movie Rebecca Won
1999 Olivier Award Best Actress Britannicus and Phedre Nominated
2000 Special BAFTA Award[47] Non-competitive; shared with John Steed's other partners Honor Blackman, Linda Thorson and Joanna Lumley The Avengers (and The New Avengers) Awarded
2002 Emmy Award Best Supporting Actress in a Miniseries or TV Movie Victoria & Albert Nominated
2013 Critics' Choice Television Award Best Guest Performer in a Drama Series Game of Thrones Nominated
Emmy Award Outstanding Guest Actress in a Drama Series Nominated
2014 Critics' Choice Television Award Best Guest Performer in a Drama Series Nominated
Emmy Award Outstanding Guest Actress in a Drama Series Nominated
2015 Emmy Award Outstanding Guest Actress in a Drama Series Nominated
2018 Drama Desk Award Best Featured Actress in a Musical My Fair Lady Nominated
Tony Award Best Featured Actress in a Musical Nominated
Emmy Award Outstanding Guest Actress in a Drama Series Game of Thrones Nominated
2019 Canneseries Variety Icon Award N/A Won

Wednesday, September 2, 2020

Digital library

From Wikipedia, the free encyclopedia
 
A digital library, digital repository, or digital collection is an online database of digital objects that can include text, still images, audio, video, digital documents, or other digital media formats. Objects can consist of digitized content such as print or photographs, as well as originally produced digital content such as word processor files or social media posts. In addition to storing content, digital libraries provide means for organizing, searching, and retrieving the content contained in the collection.

Digital libraries can vary immensely in size and scope, and can be maintained by individuals or organizations. The digital content may be stored locally, or accessed remotely via computer networks. These information retrieval systems can exchange information with each other through interoperability, and are maintained over time through sustainability efforts.

History

The early history of digital libraries is not well documented, but several key thinkers are connected to the emergence of the concept. Predecessors include Paul Otlet and Henri La Fontaine's Mundaneum, an attempt begun in 1895 to gather and systematically catalogue the world's knowledge, with the hope of bringing about world peace. The visions of the digital library were largely realized a century later during the great expansion of the Internet, which gave millions of individuals access to books and document search on the World Wide Web.

Vannevar Bush and J. C. R. Licklider are two contributors who advanced this idea into the technology of their day. Bush had supported research that led to the atomic bomb dropped on Hiroshima; after seeing the disaster, he wanted to create a machine that would show how technology can lead to understanding instead of destruction. This machine, which he named the "Memex", would include a desk with two screens, switches and buttons, and a keyboard, and would let individuals access stored books and files at speed. In 1956, the Ford Foundation funded Licklider to analyze how libraries could be improved with technology. Almost a decade later, his book "Libraries of the Future" set out his vision: a system that would use computers and networks so that human knowledge would be accessible for human needs, with feedback automated for machine purposes. This system contained three components: the corpus of knowledge, the question, and the answer. Licklider called it a procognitive system.

Early projects centered on the creation of an electronic card catalogue known as Online Public Access Catalog (OPAC). By the 1980s, the success of these endeavors resulted in OPAC replacing the traditional card catalog in many academic, public and special libraries. This permitted libraries to undertake additional rewarding co-operative efforts to support resource sharing and expand access to library materials beyond an individual library. 

An early example of a digital library is the Education Resources Information Center (ERIC), a database of education citations, abstracts and texts that was created in 1964 and made available online through DIALOG in 1969.

In 1994, digital libraries became widely visible in the research community due to a $24.4 million NSF-managed program supported jointly by DARPA's Intelligent Integration of Information (I3) program, NASA, and NSF itself. Successful research proposals came from six U.S. universities: Carnegie Mellon University, the University of California-Berkeley, the University of Michigan, the University of Illinois, the University of California-Santa Barbara, and Stanford University. Articles from the projects summarized their progress at their halfway point in May 1996. The Stanford research, by Sergey Brin and Larry Page, led to the founding of Google.

Early attempts at creating a model for digital libraries included the DELOS Digital Library Reference Model and the 5S Framework.

Terminology

The term digital library was first popularized by the NSF/DARPA/NASA Digital Libraries Initiative in 1994. With the availability of computer networks, information resources are expected to stay distributed and be accessed as needed, whereas in Vannevar Bush's essay As We May Think (1945) they were to be collected and kept within the researcher's Memex.

The term virtual library was initially used interchangeably with digital library, but is now primarily used for libraries that are virtual in other senses (such as libraries which aggregate distributed content). In the early days of digital libraries, there was discussion of the similarities and differences among the terms digital, virtual, and electronic.

A distinction is often made between content that was created in a digital format, known as born-digital, and information that has been converted from a physical medium, e.g. paper, through digitization. Not all electronic content is in digital data format. The term hybrid library is sometimes used for libraries that have both physical collections and electronic collections. For example, American Memory is a digital library within the Library of Congress.

Some important digital libraries also serve as long term archives, such as arXiv and the Internet Archive. Others, such as the Digital Public Library of America, seek to make digital information from various institutions widely accessible online.

Types of digital libraries

Institutional repositories

Many academic libraries are actively involved in building institutional repositories of the institution's books, papers, theses, and other works which can be digitized or were 'born digital'. Many of these repositories are made available to the general public with few restrictions, in accordance with the goals of open access, in contrast to the publication of research in commercial journals, where the publishers often limit access rights. Institutional, truly free, and corporate repositories are sometimes referred to as digital libraries. Institutional repository software is designed for archiving, organizing, and searching a library's content. Popular open-source solutions include DSpace, EPrints, Digital Commons, and Fedora Commons-based systems Islandora and Samvera.

National library collections

Legal deposit is often covered by copyright legislation and sometimes by laws specific to legal deposit, and requires that one or more copies of all material published in a country should be submitted for preservation in an institution, typically the national library. Since the advent of electronic documents, legislation has had to be amended to cover the new formats, such as the 2016 amendment to the Copyright Act 1968 in Australia.

Since then various types of electronic depositories have been built. The British Library’s Publisher Submission Portal and the German model at the Deutsche Nationalbibliothek have one deposit point for a network of libraries, but public access is only available in the reading rooms in the libraries. The Australian National edeposit system has the same features, but also allows for remote access by the general public for most of the content.

Digital archives

Physical archives differ from physical libraries in several ways. Traditionally, archives are defined as:
  1. Containing primary sources of information (typically letters and papers directly produced by an individual or organization) rather than the secondary sources found in a library (books, periodicals, etc.).
  2. Having their contents organized in groups rather than individual items.
  3. Having unique contents.
The technology used to create digital libraries is even more revolutionary for archives since it breaks down the second and third of these general rules. In other words, "digital archives" or "online archives" will still generally contain primary sources, but they are likely to be described individually rather than (or in addition to) in groups or collections. Further, because they are digital, their contents are easily reproducible and may indeed have been reproduced from elsewhere. The Oxford Text Archive is generally considered to be the oldest digital archive of academic physical primary source materials.

Archives differ from libraries in the nature of the materials held. Libraries collect individual published books and serials, or bounded sets of individual items. The books and journals held by libraries are not unique, since multiple copies exist and any given copy will generally prove as satisfactory as any other copy. The material in archives and manuscript libraries are "the unique records of corporate bodies and the papers of individuals and families".

A fundamental characteristic of archives is that they have to keep the context in which their records have been created and the network of relationships between them in order to preserve their informative content and provide understandable and useful information over time. The fundamental characteristic of archives resides in their hierarchical organization expressing the context by means of the archival bond. Archival descriptions are the fundamental means to describe, understand, retrieve and access archival material. At the digital level, archival descriptions are usually encoded by means of the Encoded Archival Description XML format. The EAD is a standardized electronic representation of archival description which makes it possible to provide union access to detailed archival descriptions and resources in repositories distributed throughout the world.
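
As a rough, hedged illustration of what such encoding looks like in practice, the Python sketch below reads an EAD-like description with only the standard library. The record itself is invented and radically simplified; real EAD files are far richer.

    import xml.etree.ElementTree as ET

    # An invented, radically simplified EAD-like record: the nesting of
    # components (<c01> inside <dsc> inside <archdesc>) is what encodes
    # the archival bond described above.
    EAD_SAMPLE = """
    <ead>
      <archdesc level="fonds">
        <did><unittitle>Example Family Papers</unittitle></did>
        <dsc>
          <c01 level="series">
            <did><unittitle>Correspondence, 1900-1950</unittitle></did>
          </c01>
        </dsc>
      </archdesc>
    </ead>
    """

    root = ET.fromstring(EAD_SAMPLE)
    for element in root.iter():
        if element.tag in ("archdesc", "c01"):     # hierarchical units
            title = element.find("did/unittitle")
            print(element.get("level"), "-", title.text)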

Given the importance of archives, a dedicated formal model, called NEsted SeTs for Object Hierarchies (NESTOR), built around their peculiar constituents, has been defined. NESTOR is based on the idea of expressing the hierarchical relationships between objects through the inclusion property between sets, in contrast to the binary relation between nodes exploited by the tree. NESTOR has been used to formally extend the 5S model to define a digital archive as a specific case of digital library able to take into consideration the peculiar features of archives.
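
To make the set-inclusion idea concrete, here is a toy Python sketch (the units and identifiers are invented). It illustrates the NESTOR intuition, not the formal model itself.

    # Hierarchy as set inclusion rather than a node-and-edge tree: each
    # archival unit is modelled as the set of item identifiers it
    # (transitively) contains, so "A is an ancestor of B" becomes
    # simply "set(B) is a proper subset of set(A)".
    fonds  = {"letter1", "letter2", "photo1"}
    series = {"letter1", "letter2"}          # correspondence series
    file_a = {"letter1"}

    def is_ancestor(a: set, b: set) -> bool:
        return b < a                         # proper subset test

    print(is_ancestor(fonds, series))        # True
    print(is_ancestor(series, file_a))       # True
    print(is_ancestor(file_a, series))       # False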

Features of digital libraries

The advantages of digital libraries as a means of easily and rapidly accessing books, archives and images of various types are now widely recognized by commercial interests and public bodies alike.

Traditional libraries are limited by storage space; digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it. As such, the cost of maintaining a digital library can be much lower than that of a traditional library. A physical library must spend large sums of money paying for staff, book maintenance, rent, and additional books. Digital libraries may reduce or, in some instances, do away with these fees. Both types of library require cataloging input to allow users to locate and retrieve material. Digital libraries may be more willing to adopt innovations in technology, providing users with improvements in electronic and audio book technology as well as presenting new forms of communication such as wikis and blogs; conventional libraries may consider that providing online access to their OPAC catalog is sufficient. An important advantage of digital conversion is increased accessibility to users. Digital libraries also increase availability to individuals who may not be traditional patrons of a library, due to geographic location or organizational affiliation.
  • No physical boundary. The user of a digital library need not go to the library physically; people from all over the world can gain access to the same information, as long as an Internet connection is available.
  • Round-the-clock availability. A major advantage of digital libraries is that people can gain access to the information 24/7.
  • Multiple access. The same resources can be used simultaneously by a number of institutions and patrons. This may not be the case for copyrighted material: a library may have a license for "lending out" only one copy at a time; this is achieved with a system of digital rights management where a resource can become inaccessible after expiration of the lending period or after the lender chooses to make it inaccessible (equivalent to returning the resource). A minimal sketch of this lending model appears after this list.
  • Information retrieval. The user is able to use any search term (word, phrase, title, name, subject) to search the entire collection. Digital libraries can provide very user-friendly interfaces, giving clickable access to their resources.
  • Preservation and conservation. Digitization is not a long-term preservation solution for physical collections, but it does provide access copies for materials that would otherwise degrade from repeated use. Digitized collections and born-digital objects pose many preservation and conservation concerns that analog materials do not; see the "Drawbacks of digital libraries" section below for examples.
  • Space. Whereas traditional libraries are limited by storage space, digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it and media storage technologies are more affordable than ever before.
  • Added value. Certain characteristics of objects, primarily the quality of images, may be improved. Digitization can enhance legibility and remove visible flaws such as stains and discoloration.
  • Easily accessible.
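
As referenced in the "Multiple access" item above, here is a minimal, purely illustrative Python sketch of one-copy-at-a-time lending with an expiring loan period. The class and its policy are invented for illustration, not drawn from any real digital rights management product.

    from datetime import datetime, timedelta

    class LicensedCopy:
        """One licensed copy of a resource, lendable to one patron at a time."""
        def __init__(self, title, loan_days=14):
            self.title = title
            self.loan_days = loan_days
            self.due = None                     # None = on the shelf

        def checkout(self) -> bool:
            now = datetime.now()
            if self.due is not None and now < self.due:
                return False                    # the single copy is out
            self.due = now + timedelta(days=self.loan_days)
            return True

        def give_back(self):
            self.due = None                     # equivalent to returning it

    copy_of_title = LicensedCopy("Example Title")
    print(copy_of_title.checkout())             # True: loan granted
    print(copy_of_title.checkout())             # False: already lent out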

Software

There are a number of software packages for use in general digital libraries, for notable ones see Digital library software. Institutional repository software, which focuses primarily on ingest, preservation and access of locally produced documents, particularly locally produced academic outputs, can be found in Institutional repository software. This software may be proprietary, as is the case with the Library of Congress which uses Digiboard and CTS to manage digital content.

Some digital libraries are designed and implemented so that computer systems and software can make use of the information when it is exchanged; these are referred to as semantic digital libraries. Semantic digital libraries are also used to connect with different communities across a mass of social networks. DjDL is one type of semantic digital library. Keyword-based and semantic search are the two main types of search; in DjDL, the semantic search provides a tool that creates groups for the augmentation and refinement of keyword-based searches. The conceptual knowledge used in DjDL is centered on two forms: a subject ontology and a set of concept search patterns based on that ontology. The three types of ontologies associated with this search are bibliographic ontologies, community-aware ontologies, and subject ontologies.

Metadata

In traditional libraries, the ability to find works of interest is directly related to how well they were cataloged. While cataloging electronic works digitized from a library's existing holdings may be as simple as copying or moving a record from the print to the electronic form, complex and born-digital works require substantially more effort. To handle the growing volume of electronic publications, new tools and technologies have to be designed to allow effective automated semantic classification and searching. While full-text search can be used for some items, there are many common catalog searches which cannot be performed using full text (a sketch of the kind of metadata record that supports these searches follows the list), including:
  • finding texts which are translations of other texts
  • differentiating between editions/volumes of a text/periodical
  • inconsistent descriptors (especially subject headings)
  • missing, deficient or poor quality taxonomy practices
  • linking texts published under pseudonyms to the real authors (Samuel Clemens and Mark Twain, for example)
  • differentiating non-fiction from parody (The Onion from The New York Times)
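
For illustration, here is a sketch of the kind of structured record that supports such searches. The field names loosely follow Dublin Core conventions, and the record itself is invented.

    # An invented catalog record; each field answers a question that
    # full-text search alone cannot.
    record = {
        "title": "Adventures of Huckleberry Finn",
        "creator": "Mark Twain",
        # authorized name heading links the pseudonym to the real author
        "creator_authorized": "Clemens, Samuel Langhorne, 1835-1910",
        "edition": "2nd ed.",                 # differentiates editions
        "relation_translation_of": None,      # would point at a source work
        "subject": ["Mississippi River -- Fiction"],  # controlled heading
        "genre": "fiction",                   # separates parody, non-fiction
    }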

Searching

Most digital libraries provide a search interface which allows resources to be found. These resources are typically deep web (or invisible web) resources since they frequently cannot be located by search engine crawlers. Some digital libraries create special pages or sitemaps to allow search engines to find all their resources. Digital libraries frequently use the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to expose their metadata to other digital libraries, and search engines like Google Scholar, Yahoo! and Scirus can also use OAI-PMH to find these deep web resources.
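
A minimal harvesting request might look like the following Python sketch. The verb=ListRecords and metadataPrefix=oai_dc parameters are standard parts of the OAI-PMH protocol, but the endpoint URL is a placeholder.

    from urllib.request import urlopen
    from urllib.parse import urlencode

    BASE = "https://example.org/oai"    # placeholder repository endpoint
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

    # The response is XML containing Dublin Core metadata records.
    with urlopen(BASE + "?" + urlencode(params)) as resp:
        xml_payload = resp.read()
    print(xml_payload[:200])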

There are two general strategies for searching a federation of digital libraries: distributed searching and searching previously harvested metadata.

Distributed searching typically involves a client sending multiple search requests in parallel to a number of servers in the federation. The results are gathered, duplicates are eliminated or clustered, and the remaining items are sorted and presented back to the client. Protocols like Z39.50 are frequently used in distributed searching. A benefit to this approach is that the resource-intensive tasks of indexing and storage are left to the respective servers in the federation. A drawback to this approach is that the search mechanism is limited by the different indexing and ranking capabilities of each database; therefore, making it difficult to assemble a combined result consisting of the most relevant found items.
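
Here is a sketch of that fan-out, merge, de-duplicate, and sort pattern in Python; search_server() is a stand-in for a real Z39.50 or HTTP search client.

    from concurrent.futures import ThreadPoolExecutor

    def search_server(server, query):
        # Placeholder: a real client would query the remote index here.
        return [{"id": f"{server}:{query}", "score": 1.0, "server": server}]

    def federated_search(servers, query):
        # Fan the query out to all servers in parallel.
        with ThreadPoolExecutor() as pool:
            result_lists = list(pool.map(lambda s: search_server(s, query),
                                         servers))
        merged, seen = [], set()
        for results in result_lists:
            for hit in results:
                if hit["id"] not in seen:       # eliminate duplicates
                    seen.add(hit["id"])
                    merged.append(hit)
        # Scores from different servers are not directly comparable,
        # which is exactly the drawback noted above.
        return sorted(merged, key=lambda h: h["score"], reverse=True)

    print(federated_search(["lib-a", "lib-b"], "digital preservation"))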

Searching over previously harvested metadata involves searching a locally stored index of information that has previously been collected from the libraries in the federation. When a search is performed, the search mechanism does not need to make connections with the digital libraries it is searching - it already has a local representation of the information. This approach requires the creation of an indexing and harvesting mechanism which operates regularly, connecting to all the digital libraries and querying the whole collection in order to discover new and updated resources. OAI-PMH is frequently used by digital libraries for allowing metadata to be harvested. A benefit to this approach is that the search mechanism has full control over indexing and ranking algorithms, possibly allowing more consistent results. A drawback is that harvesting and indexing systems are more resource-intensive and therefore expensive.
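
A toy sketch of the harvest-then-index strategy follows; the records are invented stand-ins for metadata harvested, for example, via OAI-PMH.

    from collections import defaultdict

    harvested = {                        # invented harvested records
        "rec1": "digital library preservation",
        "rec2": "optical character recognition",
    }

    # Build a tiny inverted index: the local representation of the
    # federation, so queries never touch the remote libraries.
    index = defaultdict(set)
    for rec_id, text in harvested.items():
        for word in text.split():
            index[word].add(rec_id)

    def search(word):
        return sorted(index.get(word, set()))

    print(search("preservation"))        # ['rec1'], no remote connection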

Digital preservation

Digital preservation aims to ensure that digital media and information systems are still interpretable into the indefinite future. Each necessary component of this must be migrated, preserved or emulated. Typically lower levels of systems (floppy disks for example) are emulated, bit-streams (the actual files stored in the disks) are preserved and operating systems are emulated as a virtual machine. Only where the meaning and content of digital media and information systems are well understood is migration possible, as is the case for office documents. However, at least one organization, the Wider Net Project, has created an offline digital library, the eGranary, by reproducing materials on a 6 TB hard drive. Instead of a bit-stream environment, the digital library contains a built-in proxy server and search engine so the digital materials can be accessed using an Internet browser. Also, the materials are not preserved for the future. The eGranary is intended for use in places or situations where Internet connectivity is very slow, non-existent, unreliable, unsuitable or too expensive.
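
One routine building block of bit-stream preservation is fixity checking: record a checksum when a file is ingested, then verify later that the preserved bit-stream is unchanged. A minimal Python sketch, with a placeholder file name:

    import hashlib

    def sha256_of(path: str) -> str:
        """Stream a file through SHA-256 without loading it all at once."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # At ingest:  stored = sha256_of("item.tiff")
    # At audit:   assert sha256_of("item.tiff") == stored, "bit rot detected"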

In the past few years, procedures for digitizing books at high speed and comparatively low cost have improved considerably, with the result that it is now possible to digitize millions of books per year. Google's book-scanning project is also working with libraries to offer digitized books, pushing forward the digitized-book realm.

Copyright and licensing

Digital libraries are hampered by copyright law because, unlike with traditional printed works, the laws of digital copyright are still being formed. The republication of material on the web by libraries may require permission from rights holders, and there is a conflict of interest between libraries and the publishers who may wish to create online versions of their acquired content for commercial purposes. In 2010, it was estimated that twenty-three percent of books in existence were created before 1923 and thus out of copyright. Of those printed after this date, only five percent were still in print as of 2010. Thus, approximately seventy-two percent of books were not available to the public.
There is a dilution of responsibility that occurs as a result of the distributed nature of digital resources. Complex intellectual property matters may become involved, since digital material is not always owned by a library; in many cases the content is public domain or self-generated only. Some digital libraries, such as Project Gutenberg, work to digitize out-of-copyright works and make them freely available to the public. An estimate has been made of the number of distinct books still extant in library catalogues from 2000 BC to 1960.

The Fair Use Provisions (17 USC § 107) under the Copyright Act of 1976 provide specific guidelines under which circumstances libraries are allowed to copy digital resources. Four factors that constitute fair use are "Purpose of the use, Nature of the work, Amount or substantiality used and Market impact."

Some digital libraries acquire a license to lend their resources. This may involve the restriction of lending out only one copy at a time for each license, and applying a system of digital rights management for this purpose (see also above). 

The Digital Millennium Copyright Act of 1998 was an act created in the United States to attempt to deal with the introduction of digital works. This Act incorporates two treaties from the year 1996. It criminalizes the attempt to circumvent measures which limit access to copyrighted materials. It also criminalizes the act of attempting to circumvent access control. This act provides an exemption for nonprofit libraries and archives which allows up to three copies to be made, one of which may be digital. This may not be made public or distributed on the web, however. Further, it allows libraries and archives to copy a work if its format becomes obsolete.

Copyright issues persist. As such, proposals have been put forward suggesting that digital libraries be exempt from copyright law. Although this would be very beneficial to the public, it may have a negative economic effect and authors may be less inclined to create new works.

Another issue that complicates matters is the desire of some publishing houses to restrict the use of digital materials such as e-books purchased by libraries. Whereas with printed books the library owns the book until it can no longer be circulated, publishers want to limit the number of times an e-book can be checked out before the library must repurchase it. "[HarperCollins] began licensing use of each e-book copy for a maximum of 26 loans. This affects only the most popular titles and has no practical effect on others. After the limit is reached, the library can repurchase access rights at a lower cost than the original price." While from a publishing perspective this sounds like a good balance between library lending and protecting publishers from a feared decrease in book sales, libraries are not set up to monitor their collections as such. They acknowledge the increased demand for digital materials available to patrons and the desire of digital libraries to expand to include best sellers, but publisher licensing may hinder the process.

Recommendation systems

Many digital libraries offer recommender systems to reduce information overload and help their users discover relevant literature. Examples of digital libraries offering recommender systems include IEEE Xplore, Europeana, and GESIS Sowiport. The recommender systems mostly work on content-based filtering, but other approaches are also used, such as collaborative filtering and citation-based recommendations. Beel et al. report that there are more than 90 different recommendation approaches for digital libraries, presented in more than 200 research articles.

Typically, digital libraries develop and maintain their own recommender systems based on existing search and recommendation frameworks such as Apache Lucene or Apache Mahout. However, there are also some recommendation-as-a-service providers specializing in offering recommender systems for digital libraries as a service.
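
For illustration, here is a content-based filtering sketch using TF-IDF in Python, assuming scikit-learn is available (Lucene and Mahout, mentioned above, play a similar role in Java-based stacks). The abstracts are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    abstracts = [
        "digital library metadata harvesting",
        "neural networks for optical character recognition",
        "metadata quality in institutional repositories",
    ]

    # Represent each document as a TF-IDF vector, then compare the first
    # document against the whole collection by cosine similarity.
    tfidf = TfidfVectorizer().fit_transform(abstracts)
    sims = cosine_similarity(tfidf[0], tfidf).ravel()

    # Recommend the most similar *other* document to abstracts[0].
    best = max((i for i in range(len(abstracts)) if i != 0),
               key=lambda i: sims[i])
    print(abstracts[best])      # the repository-metadata abstract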

Drawbacks of digital libraries

Digital libraries, or at least their digital collections, have unfortunately also brought their own problems and challenges, and many large-scale digitisation projects perpetuate them.

Future development

Large-scale digitization projects are underway at Google, the Million Book Project, and the Internet Archive. With continued improvements in book handling and presentation technologies such as optical character recognition, and the development of alternative depositories and business models, digital libraries are rapidly growing in popularity. Just as libraries have ventured into audio and video collections, so have digital libraries such as the Internet Archive. The Google Books project won a court victory allowing it to proceed with its book-scanning project, which had been halted by the Authors Guild. This helped open the road for libraries to work with Google to better reach patrons who are accustomed to computerized information.

According to Larry Lannom, Director of Information Management Technology at the nonprofit Corporation for National Research Initiatives (CNRI), "all the problems associated with digital libraries are wrapped up in archiving." He goes on to state, "If in 100 years people can still read your article, we'll have solved the problem." Daniel Akst, author of The Webster Chronicle, proposes that "the future of libraries — and of information — is digital." Peter Lyman and Hal Varian, information scientists at the University of California, Berkeley, estimate that "the world's total yearly production of print, film, optical, and magnetic content would require roughly 1.5 billion gigabytes of storage." Therefore, they believe that "soon it will be technologically possible for an average person to access virtually all recorded information."

Collection development and content selection decisions for libraries' electronic resources typically involve various qualitative and quantitative methods. In the 2020s, libraries have expanded their use of open-source data analysis tools such as the non-profit Unpaywall Journals, which combines several methods.

Optical character recognition

From Wikipedia, the free encyclopedia
 
Video of the process of scanning and real-time optical character recognition (OCR) with a portable scanner.

Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example: from a television broadcast).

Widely used as a form of data entry from printed paper data records – whether passport documents, invoices, bank statements, computerized receipts, business cards, mail, printouts of static-data, or any suitable documentation – it is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed on-line, and used in machine processes such as cognitive computing, machine translation, (extracted) text-to-speech, key data and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.

Early versions needed to be trained with images of each character, and worked on one font at a time. Advanced systems capable of producing a high degree of recognition accuracy for most fonts are now common, with support for a variety of digital image file formats as input. Some systems are capable of reproducing formatted output that closely approximates the original page, including images, columns, and other non-textual components.

History

Early optical character recognition may be traced to technologies involving telegraphy and creating reading devices for the blind. In 1914, Emanuel Goldberg developed a machine that read characters and converted them into standard telegraph code. Concurrently, Edmund Fournier d'Albe developed the Optophone, a handheld scanner that when moved across a printed page, produced tones that corresponded to specific letters or characters.

In the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives using an optical code recognition system. In 1931 he was granted US Patent 1,838,389 for the invention. The patent was acquired by IBM.

Blind and visually impaired users

In 1974, Ray Kurzweil started the company Kurzweil Computer Products, Inc. and continued development of omni-font OCR, which could recognize text printed in virtually any font (Kurzweil is often credited with inventing omni-font OCR, but it was in use by companies, including CompuScan, in the late 1960s and 1970s). Kurzweil decided that the best application of this technology would be to create a reading machine for the blind, which would allow blind people to have a computer read text to them out loud. This device required the invention of two enabling technologies – the CCD flatbed scanner and the text-to-speech synthesizer. On January 13, 1976, the successful finished product was unveiled during a widely reported news conference headed by Kurzweil and the leaders of the National Federation of the Blind. In 1978, Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload legal paper and news documents onto its nascent online databases. Two years later, Kurzweil sold his company to Xerox, which had an interest in further commercializing paper-to-computer text conversion. Xerox eventually spun it off as Scansoft, which merged with Nuance Communications.

In the 2000s, OCR was made available online as a service (WebOCR), in cloud computing environments, and in mobile applications such as real-time translation of foreign-language signs on a smartphone. With the advent of smartphones and smartglasses, OCR can be used in internet-connected mobile applications that extract text captured using the device's camera. Devices that do not have OCR functionality built into the operating system typically use an OCR API to extract the text from the image file captured by the device; the API returns the extracted text, along with information about the location of the detected text in the original image, to the device app for further processing (such as text-to-speech) or display.
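
As one concrete, hedged example of the extraction step, the sketch below uses the open-source pytesseract wrapper around the Tesseract engine rather than any particular commercial OCR API; the image path is a placeholder.

    from PIL import Image
    import pytesseract

    image = Image.open("photo_of_sign.png")      # e.g. a camera capture
    text = pytesseract.image_to_string(image)    # the extracted text
    boxes = pytesseract.image_to_boxes(image)    # character locations
    print(text)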

Various commercial and open source OCR systems are available for most common writing systems, including Latin, Cyrillic, Arabic, Hebrew, Indic, Bengali (Bangla), Devanagari, Tamil, Chinese, Japanese, and Korean characters.

Applications

OCR engines have been developed into many kinds of domain-specific OCR applications, such as receipt OCR, invoice OCR, check OCR, and legal billing document OCR. They can be used for:
  • Data entry for business documents, e.g. Cheque, passport, invoice, bank statement and receipt
  • Automatic number plate recognition
  • In airports, for passport recognition and information extraction
  • Automatic insurance documents key information extraction
  • Traffic sign recognition
  • Extracting business card information into a contact list
  • More quickly make textual versions of printed documents, e.g. book scanning for Project Gutenberg
  • Make electronic images of printed documents searchable, e.g. Google Books
  • Converting handwriting in real time to control a computer (pen computing)
  • Defeating CAPTCHA anti-bot systems, though these are specifically designed to prevent OCR. The purpose can also be to test the robustness of CAPTCHA anti-bot systems.
  • Assistive technology for blind and visually impaired users
  • Writing the instructions for vehicles by identifying CAD images in a database that are appropriate to the vehicle design as it changes in real time.
  • Making scanned documents searchable by converting them to searchable PDFs

Types

OCR is generally an "offline" process, which analyses a static document; there are also cloud-based services which provide an online OCR API. Handwriting movement analysis can be used as input to handwriting recognition. Instead of merely using the shapes of glyphs and words, this technique is able to capture motions, such as the order in which segments are drawn, the direction, and the pattern of putting the pen down and lifting it. This additional information can make the end-to-end process more accurate. This technology is also known as "on-line character recognition", "dynamic character recognition", "real-time character recognition", and "intelligent character recognition".

Techniques

Pre-processing

OCR software often "pre-processes" images to improve the chances of successful recognition. Techniques include:
  • De-skew – If the document was not aligned properly when scanned, it may need to be tilted a few degrees clockwise or counterclockwise in order to make lines of text perfectly horizontal or vertical.
  • Despeckle – remove positive and negative spots, smoothing edges
  • Binarisation – Convert an image from color or greyscale to black-and-white (called a "binary image" because there are two colors). Binarisation is performed as a simple way of separating the text (or any other desired image component) from the background, and is necessary because most commercial recognition algorithms work only on binary images, which proves simpler to process. The effectiveness of the binarisation step influences the quality of the character recognition stage to a significant extent, and careful decisions are made in the choice of binarisation for a given input image type, since the best method depends on the type of input (scanned document, scene-text image, historical degraded document, etc.). A minimal binarisation sketch appears at the end of this subsection.
  • Line removal – Cleans up non-glyph boxes and lines
  • Layout analysis or "zoning" – Identifies columns, paragraphs, captions, etc. as distinct blocks. Especially important in multi-column layouts and tables.
  • Line and word detection – Establishes baseline for word and character shapes, separates words if necessary.
  • Script recognition – In multilingual documents, the script may change at the level of individual words; hence the script must be identified before the right OCR engine can be invoked for it.
  • Character isolation or "segmentation" – For per-character OCR, multiple characters that are connected due to image artifacts must be separated; single characters that are broken into multiple pieces due to artifacts must be connected.
  • Normalize aspect ratio and scale
Segmentation of fixed-pitch fonts is accomplished relatively simply by aligning the image to a uniform grid based on where vertical grid lines will least often intersect black areas. For proportional fonts, more sophisticated techniques are needed because whitespace between letters can sometimes be greater than that between words, and vertical lines can intersect more than one character.
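A minimal pre-processing sketch, assuming OpenCV and NumPy are available; it uses Otsu's method for binarisation and a rotation estimated from the ink pixels for de-skew (the de-skew heuristic is one common recipe, and the reported angle convention varies between OpenCV versions):

    import cv2
    import numpy as np

    def preprocess(path: str) -> np.ndarray:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

        # Binarisation: Otsu's method picks the threshold automatically.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

        # De-skew: fit a rotated rectangle around the ink pixels.
        coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
        angle = cv2.minAreaRect(coords)[-1]
        if angle > 45:                # map OpenCV's (0, 90] range to a tilt
            angle -= 90
        h, w = binary.shape
        matrix = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
        return cv2.warpAffine(binary, matrix, (w, h))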

Text recognition

There are two basic types of core OCR algorithm, which may produce a ranked list of candidate characters.

Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching", "pattern recognition", or "image correlation". This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly. 

Feature extraction decomposes glyphs into "features" like lines, closed loops, line direction, and line intersections. Extracting features reduces the dimensionality of the representation and makes the recognition process computationally efficient. The features are compared with an abstract vector-like representation of a character, which might reduce to one or more glyph prototypes. General techniques of feature detection in computer vision are applicable to this type of OCR, which is commonly seen in "intelligent" handwriting recognition and indeed most modern OCR software. Nearest-neighbour classifiers such as the k-nearest neighbors algorithm are used to compare image features with stored glyph features and choose the nearest match.
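As a toy illustration of nearest-neighbour glyph classification, the sketch below trains a k-nearest-neighbours classifier on scikit-learn's bundled 8×8 digit images, with raw pixel intensities standing in for extracted features:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    digits = load_digits()                     # 8x8 greyscale digits 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=3)  # vote among 3 nearest glyphs
    knn.fit(X_train, y_train)
    print("accuracy:", knn.score(X_test, y_test))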

Software such as Cuneiform and Tesseract uses a two-pass approach to character recognition. The second pass is known as "adaptive recognition": it uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded).

Modern OCR software such as OCRopus or Tesseract uses neural networks trained to recognize whole lines of text instead of focusing on single characters.
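For instance, a minimal sketch using the pytesseract wrapper, assuming the Tesseract engine itself is installed on the system:

    from PIL import Image
    import pytesseract

    # image_to_string runs the full pipeline: layout analysis,
    # line finding, and line-level neural-network recognition.
    text = pytesseract.image_to_string(Image.open("page.png"), lang="eng")
    print(text)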

The OCR result can be stored in the standardized ALTO format, a dedicated XML schema maintained by the United States Library of Congress. Other common formats include hOCR and PAGE XML.

For a list of optical character recognition software see Comparison of optical character recognition software.

Post-processing

OCR accuracy can be increased if the output is constrained by a lexicon – a list of words that are allowed to occur in a document. This might be, for example, all the words in the English language, or a more technical lexicon for a specific field. This technique can be problematic if the document contains words not in the lexicon, like proper nouns. Tesseract uses its dictionary to influence the character segmentation step, for improved accuracy.

The output stream may be a plain text stream or file of characters, but more sophisticated OCR systems can preserve the original layout of the page and produce, for example, an annotated PDF that includes both the original image of the page and a searchable textual representation.

"Near-neighbor analysis" can make use of co-occurrence frequencies to correct errors, by noting that certain words are often seen together. For example, "Washington, D.C." is generally far more common in English than "Washington DOC".

Knowledge of the grammar of the language being scanned can also help determine if a word is likely to be a verb or a noun, for example, allowing greater accuracy.

The Levenshtein distance algorithm has also been used in OCR post-processing to further optimize results from an OCR API.
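A minimal sketch of such a correction step, matching each OCR token against a small lexicon by edit distance (the lexicon and the distance threshold are illustrative):

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    LEXICON = ["invoice", "receipt", "total", "amount"]      # illustrative

    def correct(token: str, max_distance: int = 2) -> str:
        best = min(LEXICON, key=lambda w: levenshtein(token, w))
        return best if levenshtein(token, best) <= max_distance else token

    print(correct("rece1pt"))   # -> "receipt"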

Application-specific optimizations

In recent years, the major OCR technology providers began to tune OCR systems to deal more efficiently with specific types of input. Beyond an application-specific lexicon, better performance may be obtained by taking into account business rules, standard expressions, or rich information contained in color images. This strategy is called "Application-Oriented OCR" or "Customized OCR", and has been applied to OCR of license plates, invoices, screenshots, ID cards, driver licenses, and automobile manufacturing.

The New York Times has adapted OCR technology into a proprietary tool it calls Document Helper, which enables its interactive news team to accelerate the processing of documents that need to be reviewed. They note that it enables them to process as many as 5,400 pages per hour in preparation for reporters to review the contents.

Workarounds

There are several techniques for solving the problem of character recognition by means other than improved OCR algorithms.

Forcing better input

Special fonts like OCR-A, OCR-B, or MICR fonts, with precisely specified sizing, spacing, and distinctive character shapes, allow a higher accuracy rate during transcription in bank check processing. Ironically, however, several prominent OCR engines were designed to capture text in popular fonts such as Arial or Times New Roman, and are incapable of capturing text in these specialized fonts, which differ greatly from popularly used fonts. Because Google's Tesseract can be trained to recognize new fonts, it can recognize OCR-A, OCR-B and MICR fonts.

"Comb fields" are pre-printed boxes that encourage humans to write more legibly – one glyph per box. These are often printed in a "dropout color" which can be easily removed by the OCR system.

Palm OS used a special set of glyphs, known as "Graffiti" which are similar to printed English characters but simplified or modified for easier recognition on the platform's computationally limited hardware. Users would need to learn how to write these special glyphs.

Zone-based OCR restricts the image to a specific part of a document. This is often referred to as "Template OCR".

Crowdsourcing

Crowdsourcing humans to perform character recognition can process images as quickly as computer-driven OCR, but with higher accuracy for recognizing images than computers achieve. Practical systems include the Amazon Mechanical Turk and reCAPTCHA. The National Library of Finland has developed an online interface for users to correct OCRed texts in the standardized ALTO format. Crowdsourcing has also been used not to perform character recognition directly but to invite software developers to develop image processing algorithms, for example through the use of rank-order tournaments.

Accuracy

Commissioned by the U.S. Department of Energy (DOE), the Information Science Research Institute (ISRI) had the mission to foster the improvement of automated technologies for understanding machine-printed documents, and it conducted the most authoritative of the Annual Tests of OCR Accuracy from 1992 to 1996.

Recognition of Latin-script, typewritten text is still not 100% accurate even where clear imaging is available. One study based on recognition of 19th- and early 20th-century newspaper pages concluded that character-by-character OCR accuracy for commercial OCR software varied from 81% to 99%; total accuracy can be achieved by human review or Data Dictionary Authentication. Other areas—including recognition of hand printing, cursive handwriting, and printed text in other scripts (especially those East Asian language characters which have many strokes for a single character)—are still the subject of active research. The MNIST database is commonly used for testing systems' ability to recognise handwritten digits.

Accuracy rates can be measured in several ways, and how they are measured can greatly affect the reported accuracy rate. For example, if word context (basically a lexicon of words) is not used to correct software finding non-existent words, a character error rate of 1% (99% accuracy) may result in an error rate of 5% (95% accuracy) or worse if the measurement is based on whether each whole word was recognized with no incorrect letters.
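A short sketch of the distinction, computing character-level and whole-word accuracy for the same hypothetical output (the sample strings are illustrative, and real evaluations first align the strings by edit distance):

    def char_accuracy(truth: str, ocr: str) -> float:
        # Assumes pre-aligned strings of equal length, for illustration.
        hits = sum(t == o for t, o in zip(truth, ocr))
        return hits / len(truth)

    def word_accuracy(truth: str, ocr: str) -> float:
        t_words, o_words = truth.split(), ocr.split()
        hits = sum(t == o for t, o in zip(t_words, o_words))
        return hits / len(t_words)

    truth = "the quick brown fox jumps over the lazy dog"
    ocr   = "the qu1ck brown fox jumps 0ver the lazy dog"
    print(char_accuracy(truth, ocr))   # ~0.95: 2 of 43 characters wrong
    print(word_accuracy(truth, ocr))   # ~0.78: but 2 of 9 words wrong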

An example of the difficulties inherent in digitizing old text is the inability of OCR to differentiate between the "long s" and "f" characters.

Web-based OCR systems for recognizing hand-printed text on the fly have become well known as commercial products in recent years. Accuracy rates of 80% to 90% on neat, clean hand-printed characters can be achieved by pen computing software, but that accuracy rate still translates to dozens of errors per page, making the technology useful only in very limited applications.

Recognition of cursive text is an active area of research, with recognition rates even lower than that of hand-printed text. Higher rates of recognition of general cursive script will likely not be possible without the use of contextual or grammatical information. For example, recognizing entire words from a dictionary is easier than trying to parse individual characters from script. Reading the Amount line of a cheque (which is always a written-out number) is an example where using a smaller dictionary can increase recognition rates greatly. The shapes of individual cursive characters themselves simply do not contain enough information to accurately (greater than 98%) recognize all handwritten cursive script.

Most programs allow users to set "confidence rates". This means that if the software does not achieve the desired level of confidence for a given result, the user can be notified for manual review.

An error introduced by OCR scanning is sometimes termed a "scanno" (by analogy with the term "typo").

Digital microscope

From Wikipedia, the free encyclopedia
 
An insect observed with a digital microscope.
 
A digital microscope is a variation of a traditional optical microscope that uses optics and a digital camera to output an image to a monitor, sometimes by means of software running on a computer. A digital microscope often has its own in-built LED light source, and differs from an optical microscope in that there is no provision to observe the sample directly through an eyepiece. Since the image is focused directly on the digital sensor, the entire system is designed around the monitor image, and the optics for the human eye are omitted.

Digital microscopes range from cheap USB digital microscopes to advanced industrial digital microscopes costing tens of thousands of dollars. Low-priced commercial microscopes normally omit the optics for illumination (for example Köhler illumination and phase-contrast illumination) and are more akin to webcams with a macro lens. For information about stereo microscopes with a digital camera in research and development, see optical microscope.

History

An early digital microscope was made by a company in Tokyo, Japan in 1986, which is now known as Hirox Co. LTD. It included a control box and a lens connected to a computer. The original connection to the computer was analog, through an S-video connection. Over time that connection was changed to FireWire 800 to handle the large amount of digital information coming from the digital camera. Around 2005 the company introduced advanced all-in-one units that did not require a computer, having the monitor and computer built in. Then in late 2015 it released a system that once again kept the computer separate, connected by USB 3.0 to take advantage of the speed and longevity of the USB connection. This system was also much more compact than previous models, with fewer cables and a smaller physical footprint.

A digital microscope allows several students in Laos to examine insect parts. This model cost about USD 150.
 
The invention of the USB port resulted in a multitude of USB microscopes ranging in quality and magnification. They continue to fall in price, especially compared with traditional optical microscopes. They offer high-resolution images, which are normally recorded directly to a computer, and draw power from the computer for their built-in LED light source. The resolution is directly related to the number of megapixels available on a specific model: 1.3 MP, 2 MP, 5 MP and upwards.

Stereo and digital microscopes

A primary difference between a stereo microscope and a digital microscope is the magnification. With a stereo microscope, the magnification is determined by multiplying the eyepiece magnification by the objective magnification. Since the digital microscope does not have an eyepiece, the magnification cannot be found using this method. Instead, the magnification for a digital microscope was originally defined by how many times larger the sample was reproduced on a 15" monitor. While monitor sizes have changed, the physical size of the camera chip used has not; as a result, magnification numbers and field of view still follow that original definition, regardless of the size of the monitor used. The average difference in magnification between an optical microscope and a digital microscope is about 40%: the magnification number of a stereo microscope is usually 40% less than the magnification number of a digital microscope.
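A sketch of the two conventions; the 304.8 mm reference width below assumes a 4:3 15-inch monitor and is an illustrative choice, not a standard:

    def stereo_magnification(eyepiece: float, objective: float) -> float:
        return eyepiece * objective           # e.g. 10x eyepiece, 4x objective

    def digital_magnification(fov_width_mm: float,
                              monitor_width_mm: float = 304.8) -> float:
        # How many times larger the sample appears on the reference monitor.
        return monitor_width_mm / fov_width_mm

    print(stereo_magnification(10, 4))        # 40x through the eyepiece
    print(round(digital_magnification(5.0)))  # ~61x for a 5 mm field of view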

Since the digital microscope has the image projected directly on to the CCD camera, it is possible to have higher quality recorded images than with a stereo microscope. With the stereo microscope, the lenses are made for the optics of the eye. Attaching a CCD camera to a stereo microscope will result in an image that has compromises made for the eyepiece. Although the monitor image and recorded image may be of higher quality with the digital microscope, the application of the microscope may dictate which microscope is preferred.

Digital eyepiece for microscopes

Digital eyepieces for microscopes come with software offering a wide range of optional features, such as phase-contrast observation, bright- and dark-field observation, microphotography, image processing, particle-size determination in µm, pathological reporting and patient management, video recording, and drawing and labelling.

Resolution

With a typical 2 megapixel CCD, a 1600×1200 pixels image is generated. The resolution of the image depends on the field of view of the lens used with the camera. The approximate pixel resolution can be determined by dividing the horizontal field of view (FOV) by 1600. 
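For example, with illustrative values:

    def um_per_pixel(fov_width_um: float, sensor_width_px: int = 1600) -> float:
        """Approximate size of one image pixel in the sample plane."""
        return fov_width_um / sensor_width_px

    # A 4 mm (4000 um) horizontal field of view on a 1600x1200 sensor:
    print(um_per_pixel(4000))   # 2.5 um per pixel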

Increased resolution can be accomplished by creating a sub-pixel image. The pixel shift method uses an actuator to physically move the CCD in order to take multiple overlapping images; by combining the images within the microscope, sub-pixel resolution can be generated. Averaging multiple standard images is also a proven method of obtaining sub-pixel information.

2D measurement

Most of the high-end digital microscope systems have the ability to measure samples in 2D. The measurements are done onscreen by measuring the distance from pixel to pixel. This allows for length, width, diagonal, and circle measurements as well as much more. Some systems are even capable of counting particles.
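A minimal sketch of such an onscreen measurement, reusing the µm-per-pixel calibration above (the coordinates are illustrative):

    import math

    def distance_um(p1, p2, um_per_px: float) -> float:
        """Convert a pixel-to-pixel distance into micrometres."""
        return math.dist(p1, p2) * um_per_px   # Euclidean distance in pixels

    # Two clicked points 300 px apart at 2.5 um per pixel:
    print(distance_um((100, 200), (400, 200), 2.5))   # 750.0 um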

3D measurement

3D measurement is achieved with a digital microscope by image stacking. Using a step motor, the system takes images from the lowest focal plane in the field of view to the highest, then reconstructs them, based on local contrast, into a 3D color model of the sample. Measurements can be made from this 3D model, but their accuracy depends on the step motor and the depth of field of the lens.
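A rough sketch of the contrast-based reconstruction, assuming NumPy and OpenCV; using the Laplacian response as the sharpness measure is one common heuristic, not the only choice:

    import cv2
    import numpy as np

    def focus_stack_depth(slices: list, step_um: float) -> np.ndarray:
        """Depth map from a z-stack: pick the sharpest slice per pixel.

        slices  -- greyscale images, lowest to highest focal plane
        step_um -- vertical distance moved between slices
        """
        sharpness = np.stack(
            [np.abs(cv2.Laplacian(s, cv2.CV_64F)) for s in slices])
        best = np.argmax(sharpness, axis=0)   # sharpest slice index per pixel
        return best * step_um                 # depth in micrometres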

2D and 3D tiling

2D and 3D tiling, also known as stitching or creating a panorama, can now be done with the more advanced digital microscope systems. In 2D tiling the image is automatically tiled together seamlessly in real time by moving the XY stage. 3D tiling combines the XY stage movement of 2D tiling with the Z-axis movement of 3D measurement to create a 3D panorama.

USB microscopes

A miniature USB microscope

Digital microscopes range from inexpensive units costing from perhaps US$20, which connect to a computer via USB connector, to units costing tens of thousands of dollars. These advanced digital microscope systems usually are self-contained and do not require a computer.

Sea salt crystals
 
Table salt crystals, with cubic habit
 
Some of the cheaper microscopes which connect via USB have no stand, or a simple stand with clampable joints. They are essentially very simple webcams with small lenses and sensors, mechanically arranged to allow focus at very close distances, so they can only be used to view subjects close to the lens. Magnification is typically claimed to be user-adjustable from 10× to 200–400×.

Devices which connect to a computer require software to operate. The basic operation includes viewing the microscope image and recording "snapshots". More advanced functionality, possible even with simpler devices, includes recording moving images, time-lapse photography, measurement, image enhancement, annotation, etc. Many of the simpler units which connect to a computer use standard operating system facilities, and do not require device-specific drivers. A consequence of this is that many different microscope software packages can be used interchangeably with different microscopes, although such software may not support features unique to the more advanced devices. Basic operation may be possible with software included as part of computer operating systems—in Windows XP, images from microscopes which do not require special drivers can be viewed and recorded from "Scanners and Cameras" in Control Panel.

The more advanced digital microscope units have stands that hold the microscope and allow it to be racked up and down, similarly to standard optical microscopes. Calibrated movement in all three dimensions is available through the use of a step motor and automated stage. The resolution, image quality, and dynamic range vary with price. Systems with fewer pixels have a higher frame rate (30 fps to 100 fps) and faster processing, which becomes apparent when using functions like HDR (high dynamic range). In addition to general-purpose microscopes, instruments specialized for specific applications are produced. These units can have magnifications of up to 10,000×, are either all-in-one systems (with a built-in computer) or connect to a desktop computer, and differ from the cheaper USB microscopes not only in image quality but also in capability and in build quality, giving these systems a longer lifetime.

Operator (computer programming)

From Wikipedia, the free encyclopedia