
Wednesday, September 2, 2020

Digital library

From Wikipedia, the free encyclopedia
 
A digital library, also called a digital repository or digital collection, is an online database of digital objects that can include text, still images, audio, video, digital documents, or other digital media formats. Objects can consist of digitized content such as print or photographs, as well as originally produced digital content such as word processor files or social media posts. In addition to storing content, digital libraries provide means for organizing, searching, and retrieving the content contained in the collection.

Digital libraries can vary immensely in size and scope, and can be maintained by individuals or organizations. The digital content may be stored locally, or accessed remotely via computer networks. These information retrieval systems can exchange information with one another through interoperable protocols and standards, which also supports the long-term sustainability of their collections.

History

The early history of digital libraries is not well documented, but several key thinkers are connected to the emergence of the concept. Predecessors include Paul Otlet and Henri La Fontaine's Mundaneum, an attempt begun in 1895 to gather and systematically catalogue the world's knowledge, with the hope of bringing about world peace. The vision of the digital library was largely realized a century later with the great expansion of the Internet, which gave millions of individuals access to books and searchable documents on the World Wide Web.

Vannevar Bush and J.C.R. Licklider are two contributors who advanced this idea using the technology then available. Bush had supported research that led to the bomb dropped on Hiroshima. After seeing the disaster, he wanted to create a machine that would show how technology can lead to understanding instead of destruction. This machine, which he named the "Memex," would consist of a desk with two screens, switches and buttons, and a keyboard, and would let individuals access stored books and files at rapid speed. In 1956, the Ford Foundation funded Licklider to analyze how libraries could be improved with technology. Almost a decade later, his book "Libraries of the Future" set out his vision: a system that would use computers and networks to make human knowledge accessible for human needs, with feedback automated for machine purposes. This system contained three components: the corpus of knowledge, the question, and the answer. Licklider called it a procognitive system.

Early projects centered on the creation of an electronic card catalogue known as Online Public Access Catalog (OPAC). By the 1980s, the success of these endeavors resulted in OPAC replacing the traditional card catalog in many academic, public and special libraries. This permitted libraries to undertake additional rewarding co-operative efforts to support resource sharing and expand access to library materials beyond an individual library. 

An early example of a digital library is the Education Resources Information Center (ERIC), a database of education citations, abstracts and texts that was created in 1964 and made available online through DIALOG in 1969.

In 1994, digital libraries became widely visible in the research community due to a $24.4 million NSF-managed program supported jointly by DARPA's Intelligent Integration of Information (I3) program, NASA, and NSF itself. Successful research proposals came from six U.S. universities: Carnegie Mellon University, the University of California-Berkeley, the University of Michigan, the University of Illinois, the University of California-Santa Barbara, and Stanford University. Articles from the projects summarized their progress at the halfway point in May 1996. The Stanford research, by Sergey Brin and Larry Page, led to the founding of Google.

Early attempts at creating a model for digital libraries included the DELOS Digital Library Reference Model and the 5S Framework.

Terminology

The term digital library was first popularized by the NSF/DARPA/NASA Digital Libraries Initiative in 1994. With the availability of computer networks, information resources are expected to remain distributed and be accessed as needed, whereas in Vannevar Bush's essay As We May Think (1945) they were to be collected and kept within the researcher's Memex.

The term virtual library was initially used interchangeably with digital library, but is now primarily used for libraries that are virtual in other senses (such as libraries which aggregate distributed content). In the early days of digital libraries, there was discussion of the similarities and differences among the terms digital, virtual, and electronic.

A distinction is often made between content that was created in a digital format, known as born-digital, and information that has been converted from a physical medium, e.g. paper, through digitization. Not all electronic content is in digital data format. The term hybrid library is sometimes used for libraries that have both physical and electronic collections. For example, American Memory is a digital library within the Library of Congress, which also maintains extensive physical holdings.

Some important digital libraries also serve as long term archives, such as arXiv and the Internet Archive. Others, such as the Digital Public Library of America, seek to make digital information from various institutions widely accessible online.

Types of digital libraries

Institutional repositories

Many academic libraries are actively involved in building institutional repositories of the institution's books, papers, theses, and other works which can be digitized or were 'born digital'. Many of these repositories are made available to the general public with few restrictions, in accordance with the goals of open access, in contrast to the publication of research in commercial journals, where the publishers often limit access rights. Institutional, truly free, and corporate repositories are sometimes referred to as digital libraries. Institutional repository software is designed for archiving, organizing, and searching a library's content. Widely used platforms include DSpace, EPrints, Digital Commons, and the Fedora Commons-based systems Islandora and Samvera.

National library collections

Legal deposit is often covered by copyright legislation and sometimes by laws specific to legal deposit, and requires that one or more copies of all material published in a country should be submitted for preservation in an institution, typically the national library. Since the advent of electronic documents, legislation has had to be amended to cover the new formats, such as the 2016 amendment to the Copyright Act 1968 in Australia.

Since then various types of electronic depositories have been built. The British Library’s Publisher Submission Portal and the German model at the Deutsche Nationalbibliothek have one deposit point for a network of libraries, but public access is only available in the reading rooms in the libraries. The Australian National edeposit system has the same features, but also allows for remote access by the general public for most of the content.

Digital archives

Physical archives differ from physical libraries in several ways. Traditionally, archives are defined as:
  1. Containing primary sources of information (typically letters and papers directly produced by an individual or organization) rather than the secondary sources found in a library (books, periodicals, etc.).
  2. Having their contents organized in groups rather than individual items.
  3. Having unique contents.
The technology used to create digital libraries is even more revolutionary for archives since it breaks down the second and third of these general rules. In other words, "digital archives" or "online archives" will still generally contain primary sources, but they are likely to be described individually rather than (or in addition to) in groups or collections. Further, because they are digital, their contents are easily reproducible and may indeed have been reproduced from elsewhere. The Oxford Text Archive is generally considered to be the oldest digital archive of academic physical primary source materials.

Archives differ from libraries in the nature of the materials held. Libraries collect individual published books and serials, or bounded sets of individual items. The books and journals held by libraries are not unique, since multiple copies exist and any given copy will generally prove as satisfactory as any other copy. The material in archives and manuscript libraries is "the unique records of corporate bodies and the papers of individuals and families".

A fundamental characteristic of archives is that they have to keep the context in which their records have been created and the network of relationships between them in order to preserve their informative content and provide understandable and useful information over time. The fundamental characteristic of archives resides in their hierarchical organization expressing the context by means of the archival bond. Archival descriptions are the fundamental means to describe, understand, retrieve and access archival material. At the digital level, archival descriptions are usually encoded by means of the Encoded Archival Description XML format. The EAD is a standardized electronic representation of archival description which makes it possible to provide union access to detailed archival descriptions and resources in repositories distributed throughout the world.

Given the importance of archives, a dedicated formal model, called NEsted SeTs for Object Hierarchies (NESTOR), built around their peculiar constituents, has been defined. NESTOR is based on the idea of expressing the hierarchical relationships between objects through the inclusion property between sets, in contrast to the binary relation between nodes exploited by the tree. NESTOR has been used to formally extend the 5S model to define a digital archive as a specific case of digital library able to take into consideration the peculiar features of archives.

Features of digital libraries

The advantages of digital libraries as a means of easily and rapidly accessing books, archives and images of various types are now widely recognized by commercial interests and public bodies alike.

Traditional libraries are limited by storage space; digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it. As such, the cost of maintaining a digital library can be much lower than that of a traditional library. A physical library must spend large sums of money on staff, book maintenance, rent, and additional books. Digital libraries may reduce or, in some instances, do away with these costs. Both types of library require cataloging input to allow users to locate and retrieve material. Digital libraries may be more willing to adopt technological innovations, providing users with improvements in electronic and audio book technology and presenting new forms of communication such as wikis and blogs; conventional libraries may consider that providing online access to their OPAC catalog is sufficient. An important advantage of digital conversion is increased accessibility: digital libraries also reach individuals who may not be traditional patrons of a library because of geographic location or organizational affiliation.
  • No physical boundary. The user of a digital library need not go to the library physically; people from all over the world can gain access to the same information, as long as an Internet connection is available.
  • Round-the-clock availability. A major advantage of digital libraries is that people can gain access to the information 24/7.
  • Multiple access. The same resources can be used simultaneously by a number of institutions and patrons. This may not be the case for copyrighted material: a library may have a license for "lending out" only one copy at a time; this is achieved with a system of digital rights management where a resource can become inaccessible after expiration of the lending period or after the lender chooses to make it inaccessible (equivalent to returning the resource). A minimal sketch of such a one-copy-per-license lending rule appears after this list.
  • Information retrieval. The user is able to use any search term (word, phrase, title, name, subject) to search the entire collection. Digital libraries can provide very user-friendly interfaces, giving clickable access to their resources.
  • Preservation and conservation. Digitization is not a long-term preservation solution for physical collections, but it does provide access copies for materials that would otherwise be degraded by repeated use. Digitized collections and born-digital objects pose many preservation and conservation concerns that analog materials do not; see the drawbacks discussed later on this page for examples.
  • Space. Whereas traditional libraries are limited by storage space, digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it and media storage technologies are more affordable than ever before.
  • Added value. Certain characteristics of objects, primarily the quality of images, may be improved. Digitization can enhance legibility and remove visible flaws such as stains and discoloration.
  • Easily accessible.
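
To make the "Multiple access" point above concrete, the following minimal Python sketch models a one-copy-per-license lending rule with an expiring loan period. It is an illustration only: the class and method names are invented for this example and do not correspond to any particular digital rights management system.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class LicensedTitle:
        """One licensed e-book title with a fixed number of concurrent copies."""
        title: str
        licensed_copies: int
        loans: dict = field(default_factory=dict)  # patron_id -> loan expiry time

        def _expire_loans(self, now: datetime) -> None:
            # Loans past their expiry are treated as automatically "returned".
            self.loans = {p: due for p, due in self.loans.items() if due > now}

        def checkout(self, patron_id: str, days: int = 14) -> bool:
            now = datetime.now()
            self._expire_loans(now)
            if patron_id in self.loans:
                return True   # this patron already holds a copy
            if len(self.loans) >= self.licensed_copies:
                return False  # every licensed copy is currently on loan
            self.loans[patron_id] = now + timedelta(days=days)
            return True

        def checkin(self, patron_id: str) -> None:
            self.loans.pop(patron_id, None)  # early return by the patron

    # A single-copy license behaves like lending one physical book.
    book = LicensedTitle("Example Title", licensed_copies=1)
    assert book.checkout("patron-a")      # first patron gets the copy
    assert not book.checkout("patron-b")  # second patron must wait
    book.checkin("patron-a")
    assert book.checkout("patron-b")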

Software

There are a number of software packages for use in general digital libraries, for notable ones see Digital library software. Institutional repository software, which focuses primarily on ingest, preservation and access of locally produced documents, particularly locally produced academic outputs, can be found in Institutional repository software. This software may be proprietary, as is the case with the Library of Congress which uses Digiboard and CTS to manage digital content.

Digital libraries whose design and implementation allow computer systems and software to make use of the information when it is exchanged are referred to as semantic digital libraries. Semantic libraries are also used to socialize with different communities across a mass of social networks. DjDL is one such semantic digital library. Keyword-based and semantic search are the two main types of search; the semantic search provides a tool that creates a group for the augmentation and refinement of keyword-based search. Conceptual knowledge used in DjDL is centered on two forms: the subject ontology and the set of concept search patterns based on the ontology. The three types of ontologies associated with this search are bibliographic ontologies, community-aware ontologies, and subject ontologies.

Metadata

In traditional libraries, the ability to find works of interest is directly related to how well they were cataloged. While cataloging electronic works digitized from a library's existing holdings may be as simple as copying or moving a record from the print to the electronic form, complex and born-digital works require substantially more effort. To handle the growing volume of electronic publications, new tools and technologies have to be designed to allow effective automated semantic classification and searching. While full-text search can be used for some items, there are many common catalog searches which cannot be performed using full text, including:
  • finding texts which are translations of other texts
  • differentiating between editions/volumes of a text/periodical
  • inconsistent descriptors (especially subject headings)
  • missing, deficient or poor quality taxonomy practices
  • linking texts published under pseudonyms to the real authors (Samuel Clemens and Mark Twain, for example)
  • differentiating non-fiction from parody (The Onion from The New York Times)

Searching

Most digital libraries provide a search interface which allows resources to be found. These resources are typically deep web (or invisible web) resources since they frequently cannot be located by search engine crawlers. Some digital libraries create special pages or sitemaps to allow search engines to find all their resources. Digital libraries frequently use the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to expose their metadata to other digital libraries, and search engines like Google Scholar, Yahoo! and Scirus can also use OAI-PMH to find these deep web resources.

There are two general strategies for searching a federation of digital libraries: distributed searching and searching previously harvested metadata.

Distributed searching typically involves a client sending multiple search requests in parallel to a number of servers in the federation. The results are gathered, duplicates are eliminated or clustered, and the remaining items are sorted and presented back to the client. Protocols like Z39.50 are frequently used in distributed searching. A benefit of this approach is that the resource-intensive tasks of indexing and storage are left to the respective servers in the federation. A drawback is that the search mechanism is limited by the differing indexing and ranking capabilities of each database, making it difficult to assemble a combined result consisting of the most relevant items found.
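
As a rough illustration of the fan-out, merge, de-duplicate and re-sort pattern described above, the Python sketch below queries several servers in parallel. Real federations typically speak Z39.50 or SRU through dedicated client libraries; here plain HTTP endpoints returning JSON stand in for them, and the URLs and response fields ("hits", "id", "score") are hypothetical.

    import concurrent.futures
    import json
    import urllib.parse
    import urllib.request

    # Hypothetical HTTP search endpoints standing in for Z39.50/SRU targets.
    FEDERATION = [
        "https://library-a.example.org/search",
        "https://library-b.example.org/search",
    ]

    def search_one(base_url, query, timeout=5.0):
        """Query a single server; each hit is assumed to carry an 'id' and a 'score'."""
        url = base_url + "?" + urllib.parse.urlencode({"q": query})
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return json.load(resp).get("hits", [])
        except OSError:
            return []  # an unreachable server simply contributes no results

    def federated_search(query):
        """Fan the query out in parallel, then merge, de-duplicate and re-sort."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            result_lists = pool.map(lambda url: search_one(url, query), FEDERATION)
        merged = {}
        for hits in result_lists:
            for hit in hits:
                # Keep the best-scoring copy of each duplicate record.
                if hit["id"] not in merged or hit["score"] > merged[hit["id"]]["score"]:
                    merged[hit["id"]] = hit
        return sorted(merged.values(), key=lambda hit: hit["score"], reverse=True)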

Searching over previously harvested metadata involves searching a locally stored index of information that has previously been collected from the libraries in the federation. When a search is performed, the search mechanism does not need to make connections with the digital libraries it is searching - it already has a local representation of the information. This approach requires the creation of an indexing and harvesting mechanism which operates regularly, connecting to all the digital libraries and querying the whole collection in order to discover new and updated resources. OAI-PMH is frequently used by digital libraries for allowing metadata to be harvested. A benefit to this approach is that the search mechanism has full control over indexing and ranking algorithms, possibly allowing more consistent results. A drawback is that harvesting and indexing systems are more resource-intensive and therefore expensive.
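
A minimal OAI-PMH harvesting sketch in Python, using only the standard library: it issues ListRecords requests with the Dublin Core metadata prefix and follows resumption tokens until the repository reports no more pages. The repository base URL shown is a placeholder, and error handling is omitted for brevity.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(base_url):
        """Yield (identifier, title) pairs for every record an OAI-PMH repository exposes."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as resp:
                root = ET.parse(resp).getroot()
            for record in root.iter(OAI + "record"):
                header = record.find(OAI + "header")
                identifier = header.findtext(OAI + "identifier")
                title = record.findtext(".//" + DC + "title")
                yield identifier, title
            # Large result sets are paged; an empty resumptionToken ends the harvest.
            token = root.findtext(".//" + OAI + "resumptionToken")
            if not token:
                break
            params = {"verb": "ListRecords", "resumptionToken": token}

    # Usage (the base URL is a placeholder):
    # for identifier, title in harvest("https://repository.example.org/oai"):
    #     print(identifier, title)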

Digital preservation

Digital preservation aims to ensure that digital media and information systems are still interpretable into the indefinite future. Each necessary component of this must be migrated, preserved or emulated. Typically lower levels of systems (floppy disks for example) are emulated, bit-streams (the actual files stored in the disks) are preserved and operating systems are emulated as a virtual machine. Only where the meaning and content of digital media and information systems are well understood is migration possible, as is the case for office documents. However, at least one organization, the WiderNet Project, has created an offline digital library, the eGranary, by reproducing materials on a 6 TB hard drive. Instead of a bit-stream environment, the digital library contains a built-in proxy server and search engine so the digital materials can be accessed using an Internet browser. Also, the materials are not preserved for the future. The eGranary is intended for use in places or situations where Internet connectivity is very slow, non-existent, unreliable, unsuitable or too expensive.
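
One routine part of preserving bit-streams, as described above, is a fixity check: checksums recorded at ingest are periodically recomputed to detect silent corruption. The Python sketch below shows the idea under the assumption that the collection is a plain directory of files; the manifest filename is arbitrary.

    import hashlib
    import json
    from pathlib import Path

    def file_checksum(path, algorithm="sha256"):
        """Compute a checksum of a file's bit-stream in fixed-size chunks."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_fixity(collection_dir, manifest_file="fixity.json"):
        """Store a checksum manifest when objects are ingested."""
        manifest = {str(p): file_checksum(p)
                    for p in Path(collection_dir).rglob("*") if p.is_file()}
        Path(manifest_file).write_text(json.dumps(manifest, indent=2))

    def verify_fixity(manifest_file="fixity.json"):
        """Return the paths whose bit-streams no longer match the stored checksums."""
        manifest = json.loads(Path(manifest_file).read_text())
        return [path for path, stored in manifest.items()
                if not Path(path).exists() or file_checksum(path) != stored]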

In the past few years, procedures for digitizing books at high speed and comparatively low cost have improved considerably, with the result that it is now possible to digitize millions of books per year. The Google book-scanning project is also working with libraries to offer digitized books, pushing the digitized-book realm forward.

Copyright and licensing

Digital libraries are hampered by copyright law because, unlike with traditional printed works, the laws of digital copyright are still being formed. The republication of material on the web by libraries may require permission from rights holders, and there is a conflict of interest between libraries and the publishers who may wish to create online versions of their acquired content for commercial purposes. In 2010, it was estimated that twenty-three percent of books in existence were created before 1923 and thus out of copyright. Of those printed after this date, only five percent were still in print as of 2010. Thus, approximately seventy-two percent of books were not available to the public.
There is a dilution of responsibility that occurs as a result of the distributed nature of digital resources. Complex intellectual property matters may become involved since digital material is not always owned by a library; in many cases the content is public domain or self-generated only. Some digital libraries, such as Project Gutenberg, work to digitize out-of-copyright works and make them freely available to the public. An estimate has been made of the number of distinct books still existing in library catalogues from 2000 BC to 1960.

The Fair Use Provisions (17 USC § 107) under the Copyright Act of 1976 specify the circumstances under which libraries are allowed to copy digital resources. Four factors that constitute fair use are "Purpose of the use, Nature of the work, Amount or substantiality used and Market impact."

Some digital libraries acquire a license to lend their resources. This may involve the restriction of lending out only one copy at a time for each license, and applying a system of digital rights management for this purpose (see also above). 

The Digital Millennium Copyright Act of 1998 was an act created in the United States to attempt to deal with the introduction of digital works. It incorporates two 1996 treaties and criminalizes attempts to circumvent measures, such as access controls, that limit access to copyrighted materials. The act provides an exemption for nonprofit libraries and archives which allows up to three copies to be made, one of which may be digital; these copies may not, however, be made public or distributed on the web. Further, it allows libraries and archives to copy a work if its format becomes obsolete.

Copyright issues persist. As such, proposals have been put forward suggesting that digital libraries be exempt from copyright law. Although this would be very beneficial to the public, it may have a negative economic effect and authors may be less inclined to create new works.

Another issue that complicates matters is the desire of some publishing houses to restrict the use of digital materials such as e-books purchased by libraries. Whereas with printed books the library owns the book until it can no longer be circulated, publishers want to limit the number of times an e-book can be checked out before the library must repurchase it. "[HarperCollins] began licensing use of each e-book copy for a maximum of 26 loans. This affects only the most popular titles and has no practical effect on others. After the limit is reached, the library can repurchase access rights at a lower cost than the original price." While from a publishing perspective this may sound like a fair balance between library lending and protection against a feared decrease in book sales, libraries are not set up to monitor their collections in this way. They acknowledge the increased demand for digital materials available to patrons and the desire of digital libraries to expand to include best sellers, but publisher licensing may hinder the process.

Recommendation systems

Many digital libraries offer recommender systems to reduce information overload and help their users discover relevant literature. Examples of digital libraries offering recommender systems include IEEE Xplore, Europeana, and GESIS Sowiport. These recommender systems mostly rely on content-based filtering, but other approaches such as collaborative filtering and citation-based recommendations are also used. Beel et al. report that there are more than 90 different recommendation approaches for digital libraries, presented in more than 200 research articles.

Typically, digital libraries develop and maintain their own recommender systems based on existing search and recommendation frameworks such as Apache Lucene or Apache Mahout. However, there are also some recommendation-as-a-service providers that specialize in offering recommender systems for digital libraries as a service.
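
As a small illustration of the content-based filtering approach mentioned above, the Python sketch below scores documents by TF-IDF cosine similarity to an item the user has already read. Production systems would rely on frameworks such as Apache Lucene or Apache Mahout instead; the tokenization here (lower-cased whitespace splitting) is deliberately naive.

    import math
    from collections import Counter

    def tfidf_vectors(documents):
        """Build a TF-IDF weighted bag-of-words vector for each document."""
        tokenized = [doc.lower().split() for doc in documents]
        doc_freq = Counter(term for tokens in tokenized for term in set(tokens))
        n_docs = len(documents)
        vectors = []
        for tokens in tokenized:
            counts = Counter(tokens)
            vectors.append({term: (count / len(tokens)) * math.log(n_docs / doc_freq[term])
                            for term, count in counts.items()})
        return vectors

    def cosine(a, b):
        """Cosine similarity between two sparse term-weight dictionaries."""
        dot = sum(weight * b.get(term, 0.0) for term, weight in a.items())
        norm = (math.sqrt(sum(w * w for w in a.values()))
                * math.sqrt(sum(w * w for w in b.values())))
        return dot / norm if norm else 0.0

    def recommend(read_index, abstracts, top_n=3):
        """Rank the other documents by similarity to the one the user has read."""
        vectors = tfidf_vectors(abstracts)
        scores = [(cosine(vectors[read_index], v), i)
                  for i, v in enumerate(vectors) if i != read_index]
        return [i for _, i in sorted(scores, reverse=True)[:top_n]]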

Drawbacks of digital libraries

Digital libraries, or at least their digital collections, have also brought their own problems and challenges, in areas such as digital preservation, copyright and licensing, and metadata quality. Many large-scale digitisation projects perpetuate these problems.

Future development

Large-scale digitization projects are underway at Google, the Million Book Project, and the Internet Archive. With continued improvements in book handling and presentation technologies such as optical character recognition, and the development of alternative depositories and business models, digital libraries are rapidly growing in popularity. Just as libraries have ventured into audio and video collections, so have digital libraries such as the Internet Archive. The Google Books project recently won a court victory allowing it to proceed with its book-scanning project, which had been challenged by the Authors Guild. This helped open the road for libraries to work with Google to better reach patrons who are accustomed to computerized information.

According to Larry Lannom, Director of Information Management Technology at the nonprofit Corporation for National Research Initiatives (CNRI), "all the problems associated with digital libraries are wrapped up in archiving." He goes on to state, "If in 100 years people can still read your article, we'll have solved the problem." Daniel Akst, author of The Webster Chronicle, proposes that "the future of libraries — and of information — is digital." Peter Lyman and Hal Varian, information scientists at the University of California, Berkeley, estimate that "the world's total yearly production of print, film, optical, and magnetic content would require roughly 1.5 billion gigabytes of storage." Therefore, they believe that "soon it will be technologically possible for an average person to access virtually all recorded information."

Collection development and content selection decisions for libraries' electronic resources typically involve various qualitative and quantitative methods. In the 2020s, libraries have expanded their use of open-source data analysis tools such as the non-profit Unpaywall Journals, which combines several methods.

Optical character recognition

From Wikipedia, the free encyclopedia
 
[Video: the process of scanning and real-time optical character recognition (OCR) with a portable scanner.]

Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example: from a television broadcast).

Widely used as a form of data entry from printed paper data records – whether passport documents, invoices, bank statements, computerized receipts, business cards, mail, printouts of static-data, or any suitable documentation – it is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed on-line, and used in machine processes such as cognitive computing, machine translation, (extracted) text-to-speech, key data and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.

Early versions needed to be trained with images of each character, and worked on one font at a time. Advanced systems capable of producing a high degree of recognition accuracy for most fonts are now common, with support for a variety of digital image file formats as input. Some systems are capable of reproducing formatted output that closely approximates the original page, including images, columns, and other non-textual components.

History

Early optical character recognition may be traced to technologies involving telegraphy and creating reading devices for the blind. In 1914, Emanuel Goldberg developed a machine that read characters and converted them into standard telegraph code. Concurrently, Edmund Fournier d'Albe developed the Optophone, a handheld scanner that when moved across a printed page, produced tones that corresponded to specific letters or characters.

In the late 1920s and into the 1930s Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives using an optical code recognition system. In 1931 he was granted USA Patent number 1,838,389 for the invention. The patent was acquired by IBM.

Blind and visually impaired users

In 1974, Ray Kurzweil started the company Kurzweil Computer Products, Inc. and continued development of omni-font OCR, which could recognize text printed in virtually any font (Kurzweil is often credited with inventing omni-font OCR, but it was in use by companies, including CompuScan, in the late 1960s and 1970s). Kurzweil decided that the best application of this technology would be to create a reading machine for the blind, which would allow blind people to have a computer read text to them out loud. This device required the invention of two enabling technologies – the CCD flatbed scanner and the text-to-speech synthesizer. On January 13, 1976, the successful finished product was unveiled during a widely reported news conference headed by Kurzweil and the leaders of the National Federation of the Blind. In 1978, Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload legal paper and news documents onto its nascent online databases. Two years later, Kurzweil sold his company to Xerox, which had an interest in further commercializing paper-to-computer text conversion. Xerox eventually spun it off as Scansoft, which merged with Nuance Communications.

In the 2000s, OCR was made available online as a service (WebOCR), in a cloud computing environment, and in mobile applications such as real-time translation of foreign-language signs on a smartphone. With the advent of smartphones and smartglasses, OCR can be used in internet-connected mobile device applications that extract text captured using the device's camera. Devices that do not have OCR functionality built into the operating system will typically use an OCR API to extract the text from the image file captured and provided by the device. The OCR API returns the extracted text, along with information about the location of the detected text in the original image, back to the device app for further processing (such as text-to-speech) or display.

Various commercial and open source OCR systems are available for most common writing systems, including Latin, Cyrillic, Arabic, Hebrew, Indic, Bengali (Bangla), Devanagari, Tamil, Chinese, Japanese, and Korean characters.

Applications

OCR engines have been developed into many kinds of domain-specific OCR applications, such as receipt OCR, invoice OCR, check OCR, and legal billing document OCR.
They can be used for:
  • Data entry for business documents, e.g. cheques, passports, invoices, bank statements and receipts
  • Automatic number plate recognition
  • In airports, for passport recognition and information extraction
  • Automatic insurance documents key information extraction
  • Traffic sign recognition
  • Extracting business card information into a contact list
  • Making textual versions of printed documents more quickly, e.g. book scanning for Project Gutenberg
  • Making electronic images of printed documents searchable, e.g. Google Books
  • Converting handwriting in real time to control a computer (pen computing)
  • Defeating CAPTCHA anti-bot systems, though these are specifically designed to prevent OCR. The purpose can also be to test the robustness of CAPTCHA anti-bot systems.
  • Assistive technology for blind and visually impaired users
  • Writing the instructions for vehicles by identifying CAD images in a database that are appropriate to the vehicle design as it changes in real time.
  • Making scanned documents searchable by converting them to searchable PDFs

Types

OCR is generally an "offline" process, which analyses a static document. There are cloud based services which provide an online OCR API service. Handwriting movement analysis can be used as input to handwriting recognition. Instead of merely using the shapes of glyphs and words, this technique is able to capture motions, such as the order in which segments are drawn, the direction, and the pattern of putting the pen down and lifting it. This additional information can make the end-to-end process more accurate. This technology is also known as "on-line character recognition", "dynamic character recognition", "real-time character recognition", and "intelligent character recognition".

Techniques

Pre-processing

OCR software often "pre-processes" images to improve the chances of successful recognition. Techniques include:
  • De-skew – If the document was not aligned properly when scanned, it may need to be tilted a few degrees clockwise or counterclockwise in order to make lines of text perfectly horizontal or vertical.
  • Despeckle – remove positive and negative spots, smoothing edges
  • Binarisation – Converting an image from colour or greyscale to black-and-white (a "binary image", because there are two colours). Binarisation is a simple way of separating the text (or any other desired image component) from the background, and it is necessary because most commercial recognition algorithms work only on binary images, which are simpler to process. The effectiveness of the binarisation step significantly influences the quality of the character recognition stage, so the binarisation method must be chosen carefully for a given input image type (scanned document, scene-text image, historical degraded document, etc.); a small binarisation sketch appears at the end of this subsection.
  • Line removal – Cleans up non-glyph boxes and lines
  • Layout analysis or "zoning" – Identifies columns, paragraphs, captions, etc. as distinct blocks. Especially important in multi-column layouts and tables.
  • Line and word detection – Establishes baseline for word and character shapes, separates words if necessary.
  • Script recognition – In multilingual documents, the script may change at the level of the words and hence, identification of the script is necessary, before the right OCR can be invoked to handle the specific script.
  • Character isolation or "segmentation" – For per-character OCR, multiple characters that are connected due to image artifacts must be separated; single characters that are broken into multiple pieces due to artifacts must be connected.
  • Normalize aspect ratio and scale
Segmentation of fixed-pitch fonts is accomplished relatively simply by aligning the image to a uniform grid based on where vertical grid lines will least often intersect black areas. For proportional fonts, more sophisticated techniques are needed because whitespace between letters can sometimes be greater than that between words, and vertical lines can intersect more than one character.
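
The binarisation sketch referred to in the list above: a straightforward Python/NumPy implementation of Otsu's method, which picks the threshold that maximises the between-class variance of the grey-level histogram. Otsu's method is one common choice among many, and the synthetic "page" at the end exists only to make the example self-contained.

    import numpy as np

    def otsu_threshold(gray):
        """Pick the threshold that maximises between-class variance (Otsu's method)."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * prob[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            between_var = w0 * w1 * (mu0 - mu1) ** 2
            if between_var > best_var:
                best_t, best_var = t, between_var
        return best_t

    def binarise(gray):
        """Return a binary image: dark (text) pixels as 1, background as 0."""
        return (gray < otsu_threshold(gray)).astype(np.uint8)

    # A synthetic 8-bit greyscale "page": light background with one dark stroke.
    page = np.full((100, 100), 230, dtype=np.uint8)
    page[40:60, 20:80] = 40
    binary = binarise(page)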

Text recognition

There are two basic types of core OCR algorithm, which may produce a ranked list of candidate characters.

Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching", "pattern recognition", or "image correlation". This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly. 

Feature extraction decomposes glyphs into "features" like lines, closed loops, line direction, and line intersections. Extracting these features reduces the dimensionality of the representation and makes the recognition process computationally efficient. The features are compared with an abstract vector-like representation of a character, which might reduce to one or more glyph prototypes. General techniques of feature detection in computer vision are applicable to this type of OCR, which is commonly seen in "intelligent" handwriting recognition and indeed most modern OCR software. Nearest-neighbour classifiers such as the k-nearest neighbors algorithm are used to compare image features with stored glyph features and choose the nearest match.
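
A toy Python/NumPy sketch of the feature-extraction-plus-nearest-neighbour idea described above: each binary glyph image is reduced to a small vector of zone densities (the fraction of ink in each cell of a coarse grid) and matched to stored prototypes with a 1-nearest-neighbour rule. The two synthetic "glyphs" are invented for the example; real systems use far richer features.

    import numpy as np

    def zone_features(glyph, zones=4):
        """Reduce a binary glyph image to ink densities over a zones x zones grid."""
        h, w = glyph.shape
        features = []
        for i in range(zones):
            for j in range(zones):
                cell = glyph[i * h // zones:(i + 1) * h // zones,
                             j * w // zones:(j + 1) * w // zones]
                features.append(cell.mean())
        return np.array(features)

    def classify(glyph, prototypes):
        """1-nearest-neighbour match against stored per-character feature prototypes."""
        query = zone_features(glyph)
        return min(prototypes, key=lambda ch: np.linalg.norm(query - prototypes[ch]))

    # Two synthetic prototypes: a vertical bar ("I") and a filled block ("O"-like blob).
    bar = np.zeros((16, 16))
    bar[:, 7:9] = 1
    blob = np.zeros((16, 16))
    blob[2:14, 2:14] = 1
    prototypes = {"I": zone_features(bar), "O": zone_features(blob)}
    print(classify(bar, prototypes))  # -> "I"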

Software such as Cuneiform and Tesseract use a two-pass approach to character recognition. The second pass is known as "adaptive recognition" and uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded).

Modern OCR software such as OCRopus or Tesseract uses neural networks trained to recognize whole lines of text instead of focusing on single characters.

The OCR result can be stored in the standardized ALTO format, a dedicated XML schema maintained by the United States Library of Congress. Other common formats include hOCR and PAGE XML.

For a list of optical character recognition software see Comparison of optical character recognition software.

Post-processing

OCR accuracy can be increased if the output is constrained by a lexicon – a list of words that are allowed to occur in a document. This might be, for example, all the words in the English language, or a more technical lexicon for a specific field. This technique can be problematic if the document contains words not in the lexicon, like proper nouns. Tesseract uses its dictionary to influence the character segmentation step, for improved accuracy.

The output stream may be a plain text stream or file of characters, but more sophisticated OCR systems can preserve the original layout of the page and produce, for example, an annotated PDF that includes both the original image of the page and a searchable textual representation.

"Near-neighbor analysis" can make use of co-occurrence frequencies to correct errors, by noting that certain words are often seen together. For example, "Washington, D.C." is generally far more common in English than "Washington DOC".

Knowledge of the grammar of the language being scanned can also help determine if a word is likely to be a verb or a noun, for example, allowing greater accuracy.

The Levenshtein Distance algorithm has also been used in OCR post-processing to further optimize results from an OCR API.
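
A minimal Python sketch of lexicon-based post-correction using the Levenshtein distance mentioned above: an OCR token is replaced by the closest dictionary entry when that entry lies within a small edit distance. The three-word lexicon is a toy example.

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance (insert/delete/substitute)."""
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                current.append(min(previous[j] + 1,                # deletion
                                   current[j - 1] + 1,             # insertion
                                   previous[j - 1] + (ca != cb)))  # substitution
            previous = current
        return previous[-1]

    def correct(word, lexicon, max_distance=2):
        """Replace an OCR token with the closest lexicon entry, if one is close enough."""
        if word in lexicon:
            return word
        best = min(lexicon, key=lambda entry: levenshtein(word, entry))
        return best if levenshtein(word, best) <= max_distance else word

    lexicon = {"washington", "document", "recognition"}
    print(correct("recognltion", lexicon))  # OCR read 'i' as 'l' -> "recognition"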

Application-specific optimizations

In recent years, the major OCR technology providers began to tweak OCR systems to deal more efficiently with specific types of input. Beyond an application-specific lexicon, better performance may be had by taking into account business rules, standard expression, or rich information contained in color images. This strategy is called "Application-Oriented OCR" or "Customized OCR", and has been applied to OCR of license plates, invoices, screenshots, ID cards, driver licenses, and automobile manufacturing.




The New York Times has adapted OCR technology into a proprietary tool it calls Document Helper, which enables its interactive news team to accelerate the processing of documents that need to be reviewed. They note that it enables them to process as many as 5,400 pages per hour in preparation for reporters to review the contents.

Workarounds

There are several techniques for solving the problem of character recognition by means other than improved OCR algorithms.

Forcing better input

Special fonts like OCR-A, OCR-B, or MICR fonts, with precisely specified sizing, spacing, and distinctive character shapes, allow a higher accuracy rate during transcription in bank check processing. Ironically, however, several prominent OCR engines were designed to capture text in popular fonts such as Arial or Times New Roman, and are incapable of capturing text in these specialized fonts, which differ considerably from popularly used fonts. Since Google's Tesseract can be trained to recognize new fonts, it can recognize OCR-A, OCR-B and MICR fonts.

"Comb fields" are pre-printed boxes that encourage humans to write more legibly – one glyph per box. These are often printed in a "dropout color" which can be easily removed by the OCR system.

Palm OS used a special set of glyphs, known as "Graffiti" which are similar to printed English characters but simplified or modified for easier recognition on the platform's computationally limited hardware. Users would need to learn how to write these special glyphs.

Zone-based OCR restricts the image to a specific part of a document. This is often referred to as "Template OCR".

Crowdsourcing

Crowdsourcing humans to perform character recognition can process images as quickly as computer-driven OCR, but with higher accuracy for recognizing images than computers achieve. Practical systems include Amazon Mechanical Turk and reCAPTCHA. The National Library of Finland has developed an online interface for users to correct OCRed texts in the standardized ALTO format. Crowdsourcing has also been used not to perform character recognition directly but to invite software developers to develop image-processing algorithms, for example through the use of rank-order tournaments.

Accuracy

Commissioned by the U.S. Department of Energy (DOE), the Information Science Research Institute (ISRI) had the mission to foster the improvement of automated technologies for understanding machine-printed documents, and it conducted the most authoritative of the Annual Tests of OCR Accuracy from 1992 to 1996.

Recognition of Latin-script, typewritten text is still not 100% accurate even where clear imaging is available. One study based on recognition of 19th- and early 20th-century newspaper pages concluded that character-by-character OCR accuracy for commercial OCR software varied from 81% to 99%; total accuracy can be achieved by human review or Data Dictionary Authentication. Other areas—including recognition of hand printing, cursive handwriting, and printed text in other scripts (especially those East Asian language characters which have many strokes for a single character)—are still the subject of active research. The MNIST database is commonly used for testing systems' ability to recognise handwritten digits.

Accuracy rates can be measured in several ways, and how they are measured can greatly affect the reported accuracy rate. For example, if word context (basically a lexicon of words) is not used to correct software finding non-existent words, a character error rate of 1% (99% accuracy) may result in an error rate of 5% (95% accuracy) or worse if the measurement is based on whether each whole word was recognized with no incorrect letters.
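
The gap between character-level and word-level accuracy can be illustrated with a small back-of-the-envelope computation, under the simplifying assumptions that character errors are independent and that every word is five characters long (neither holds exactly in practice):

    # Character accuracy vs. whole-word accuracy, assuming independent character
    # errors and a uniform word length of five characters (both simplifications).
    char_accuracy = 0.99
    word_length = 5
    word_accuracy = char_accuracy ** word_length
    print(f"{word_accuracy:.3f}")  # ~0.951, i.e. roughly a 5% word error rate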

An example of the difficulties inherent in digitizing old text is the inability of OCR to differentiate between the "long s" and "f" characters.

Web-based OCR systems for recognizing hand-printed text on the fly have become well known as commercial products in recent years. Accuracy rates of 80% to 90% on neat, clean hand-printed characters can be achieved by pen computing software, but that accuracy rate still translates to dozens of errors per page, making the technology useful only in very limited applications.

Recognition of cursive text is an active area of research, with recognition rates even lower than that of hand-printed text. Higher rates of recognition of general cursive script will likely not be possible without the use of contextual or grammatical information. For example, recognizing entire words from a dictionary is easier than trying to parse individual characters from script. Reading the Amount line of a cheque (which is always a written-out number) is an example where using a smaller dictionary can increase recognition rates greatly. The shapes of individual cursive characters themselves simply do not contain enough information to accurately (greater than 98%) recognize all handwritten cursive script.

Most programs allow users to set "confidence rates". This means that if the software does not achieve the desired level of accuracy, a user can be notified for manual review.

An error introduced by OCR scanning is sometimes termed a "scanno" (by analogy with the term "typo").

Digital microscope

From Wikipedia, the free encyclopedia
 
[Image: an insect observed with a digital microscope.]
 
A digital microscope is a variation of a traditional optical microscope that uses optics and a digital camera to output an image to a monitor, sometimes by means of software running on a computer. A digital microscope often has its own in-built LED light source, and differs from an optical microscope in that there is no provision to observe the sample directly through an eyepiece. Since the image is focused on the digital sensor, the entire system is designed around the monitor image, and the optics for the human eye are omitted.




Digital microscopes can range from cheap USB digital microscopes to advanced industrial digital microscopes costing tens of thousands of dollars. The low-priced commercial microscopes normally omit the optics for illumination (for example Köhler illumination and phase contrast illumination) and are more akin to webcams with a macro lens. For information about stereo microscopes with a digital camera in research and development, see optical microscope.

History

An early digital microscope was made by a company in Tokyo, Japan in 1986, which is now known as Hirox Co. LTD. It included a control box and a lens connected to a computer. The original connection to the computer was analog, through an S-video connection. Over time that connection was changed to FireWire 800 to handle the large amount of digital information coming from the digital camera. Around 2005 they introduced advanced all-in-one units that did not require a computer, having the monitor and computer built in. Then in late 2015 they released a system that once again had the computer separate, but connected by USB 3.0, taking advantage of the speed and longevity of the USB connection. This system was also much more compact than previous models, with a reduction in the number of cables and in the physical size of the unit itself.

[Image: a digital microscope allows several students in Laos to examine insect parts. This model cost about USD 150.]
 
The invention of the USB port resulted in a multitude of USB microscopes ranging in quality and magnification. They continue to fall in price, especially compared with traditional optical microscopes. They offer high-resolution images which are normally recorded directly to a computer, and they draw their power, including for the built-in LED light source, from the computer. The resolution is directly related to the number of megapixels available on a specific model, from 1.3 MP and 2 MP to 5 MP and upwards.

Stereo and digital microscopes

A primary difference between a stereo microscope and a digital microscope is the magnification. With a stereo microscope, the magnification is determined by multiplying the eyepiece magnification times the objective magnification. Since the digital microscope does not have an eyepiece, the magnification cannot be found using this method. Instead the magnification for a digital microscope was originally determined by how many times larger the sample was reproduced on a 15” monitor. While monitor sizes have changed, the physical size of the camera chip used has not. As a result magnification numbers and field of view are still the same as that original definition, regardless of the size of the monitor used. The average difference in magnification between an optical microscope and a digital microscope is about 40%. Thus the magnification number of a stereomicroscope is usually 40% less than the magnification number of a digital microscope.

Since the digital microscope has the image projected directly on to the CCD camera, it is possible to have higher quality recorded images than with a stereo microscope. With the stereo microscope, the lenses are made for the optics of the eye. Attaching a CCD camera to a stereo microscope will result in an image that has compromises made for the eyepiece. Although the monitor image and recorded image may be of higher quality with the digital microscope, the application of the microscope may dictate which microscope is preferred.

Digital eyepiece for microscopes

Digital eyepieces for microscopes are supplied with software offering a wide range of optional functions, such as phase-contrast observation, bright- and dark-field observation, microphotography, image processing, particle-size determination in µm, pathology reporting and patient management, video recording, and drawing and labelling.

Resolution

With a typical 2 megapixel CCD, a 1600×1200 pixels image is generated. The resolution of the image depends on the field of view of the lens used with the camera. The approximate pixel resolution can be determined by dividing the horizontal field of view (FOV) by 1600. 
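
For example, with the 1600-pixel-wide image described above, the approximate on-sample pixel size follows directly from the field of view; the 32 mm field of view below is just an illustrative value.

    def pixel_resolution_um(horizontal_fov_mm, horizontal_pixels=1600):
        """Approximate size of one pixel on the sample, in micrometres."""
        return horizontal_fov_mm / horizontal_pixels * 1000

    # Example: a lens giving a 32 mm wide field of view on a 1600x1200 sensor.
    print(pixel_resolution_um(32.0))  # -> 20.0 micrometres per pixel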

Increased resolution can be accomplished by creating a sub-pixel image. The Pixel Shift Method uses an actuator to physically move the CCD in order to take multiple overlapping images. By combining the images within the microscope, sub-pixel resolution can be generated. Averaging multiple standard images is also a proven method of obtaining sub-pixel information.

2D measurement

Most of the high-end digital microscope systems have the ability to measure samples in 2D. The measurements are done onscreen by measuring the distance from pixel to pixel. This allows for length, width, diagonal, and circle measurements as well as much more. Some systems are even capable of counting particles.

3D measurement

3D measurement is achieved with a digital microscope by image stacking. Using a step motor, the system takes images from the lowest focal plane in the field of view to the highest focal plane, then reconstructs these images into a 3D model based on contrast to give a 3D color image of the sample. Measurements can be made from this 3D model, but their accuracy depends on the step motor and the depth of field of the lens.
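
A highly simplified Python/NumPy sketch of the image-stacking idea: for each pixel, the plane in the Z-stack with the highest local contrast (here a squared Laplacian response) is taken as the in-focus plane, and its index multiplied by the known Z step gives a rough height map. Real instruments add calibration, interpolation between planes, and smoothing.

    import numpy as np

    def local_contrast(image):
        """Simple per-pixel focus measure: squared Laplacian response."""
        padded = np.pad(image.astype(float), 1, mode="edge")
        laplacian = (padded[1:-1, :-2] + padded[1:-1, 2:]
                     + padded[:-2, 1:-1] + padded[2:, 1:-1]
                     - 4 * padded[1:-1, 1:-1])
        return laplacian ** 2

    def depth_from_stack(stack, z_step_um):
        """For each pixel, pick the Z plane where the focus measure is greatest."""
        focus = np.stack([local_contrast(img) for img in stack])  # shape: (z, h, w)
        best_plane = focus.argmax(axis=0)                         # index of sharpest plane
        return best_plane * z_step_um                             # rough height map in micrometres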

2D and 3D tiling

2D and 3D tiling, also known as stitching or creating a panorama, can now be done with the more advanced digital microscope systems. In 2D tiling the image is automatically tiled together seamlessly in real time by moving the XY stage. 3D tiling combines the XY stage movement of 2D tiling with the Z-axis movement of 3D measurement to create a 3D panorama.

USB microscopes

[Image: a miniature USB microscope.]

Digital microscopes range from inexpensive units costing from perhaps US$20, which connect to a computer via USB connector, to units costing tens of thousands of dollars. These advanced digital microscope systems usually are self-contained and do not require a computer.

[Images: sea salt crystals; table salt crystals with cubic habit.]
 
Some of the cheaper microscopes which connect via USB have no stand, or a simple stand with clampable joints. They are essentially very simple webcams with small lenses and sensors, mechanically arranged to allow focus at very close distances, and can also be used to view subjects which are not very close to the lens. Magnification is typically claimed to be user-adjustable from 10× to 200-400×.

Devices which connect to a computer require software to operate. The basic operation includes viewing the microscope image and recording "snapshots". More advanced functionality, possible even with simpler devices, includes recording moving images, time-lapse photography, measurement, image enhancement, annotation, etc. Many of the simpler units which connect to a computer use standard operating system facilities, and do not require device-specific drivers. A consequence of this is that many different microscope software packages can be used interchangeably with different microscopes, although such software may not support features unique to the more advanced devices. Basic operation may be possible with software included as part of computer operating systems—in Windows XP, images from microscopes which do not require special drivers can be viewed and recorded from "Scanners and Cameras" in Control Panel.

The more advanced digital microscope units have stands that hold the microscope and allow it to be racked up and down, similarly to standard optical microscopes. Calibrated movement in all three dimensions is available through the use of a step motor and automated stage. The resolution, image quality, and dynamic range vary with price. Systems with a lower number of pixels have a higher frame rate (30 fps to 100 fps) and faster processing; the faster processing can be seen when using functions like HDR (high dynamic range). In addition to general-purpose microscopes, instruments specialized for specific applications are produced. These units can have magnification ranges up to 10,000×, and are either all-in-one systems (with the computer built in) or connect to a desktop computer. They differ from the cheaper USB microscopes not only in the quality of the image, but also in capability and in the quality of the system's construction, giving these types of systems a longer lifetime.
