
Saturday, September 21, 2024

Data recovery

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Data_recovery

In computing, data recovery is the process of retrieving deleted, inaccessible, lost, corrupted, damaged, or formatted data from secondary storage, removable media, or files, when the data stored in them cannot be accessed in the usual way. The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).

Logical failures occur when the hard drive devices are functional but the user or the operating system cannot retrieve or access the data stored on them. Logical failures can occur due to corruption of the engineering chip, lost partitions, firmware failure, or failures during formatting/re-installation.

Data recovery can range from a simple task to a significant technical challenge, which is why there are software companies that specialize in this field.

About

The most common data recovery scenarios involve an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be accomplished by booting from a Live CD, DVD, or USB drive instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.

Another scenario involves a drive-level failure, such as a compromised file system or drive partition, or a hard disk drive failure. In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions involve repairing the logical file system, partition table, or master boot record, or updating the firmware or drive recovery techniques ranging from software-based recovery of corrupted data, to hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), to hardware replacement on a physically damaged drive which allows for the extraction of data to a new drive. If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.

In a third scenario, files have been accidentally "deleted" from a storage medium by the user. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and the space the deleted data occupied is then made available for later overwriting. To end users, deleted files are not discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in a number of disconnected fragments, and may be recoverable if not overwritten by other data files.
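To make that mechanism concrete, the following Python sketch models a hypothetical, highly simplified file system in which deleting a file removes only the directory entry and marks its blocks as reusable; the block contents themselves survive until something else overwrites them. Everything here (block size, structures, names) is invented purely for illustration.

```python
# Toy model: deleting a file removes metadata, not the underlying data blocks.
blocks = {}          # block number -> bytes actually stored on the "disk"
directory = {}       # file name -> list of block numbers
free_blocks = []     # blocks available for reuse (contents are not wiped)

def write_file(name, data, start_block):
    # Split data into 4-byte "blocks" and record them in the directory.
    numbers = []
    for i in range(0, len(data), 4):
        block_no = start_block + i // 4
        blocks[block_no] = data[i:i + 4]
        numbers.append(block_no)
    directory[name] = numbers

def delete_file(name):
    # Only the directory entry is dropped; the block contents stay intact.
    free_blocks.extend(directory.pop(name))

write_file("secret.txt", b"top secret notes", 0)
delete_file("secret.txt")

print("secret.txt" in directory)                 # False: invisible to a file manager
print(b"".join(blocks[n] for n in free_blocks))  # b'top secret notes': still recoverable
```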

The term "data recovery" is also used in the context of forensic applications or espionage, where data which have been encrypted, hidden, or deleted, rather than damaged, are recovered. Sometimes data present in the computer gets encrypted or hidden due to reasons like virus attacks which can only be recovered by some computer forensic experts.

Physical damage

A wide variety of failures can cause physical damage to storage media, which may result from human errors and natural disasters. CD-ROMs can have their metallic substrate or dye layer scratched off; hard disks can suffer from a multitude of mechanical failures, such as head crashes, PCB failure, and failed motors; tapes can simply break.

Physical damage to a hard drive, even in cases where a head crash has occurred, does not necessarily mean there will be a permanent loss of data. The techniques employed by many professional data recovery companies can typically salvage most, if not all, of the data that had been lost when the failure occurred.

Of course, there are exceptions to this, such as cases where severe damage to the hard drive platters may have occurred. However, if the hard drive can be repaired and a full image or clone created, then the logical file structure can be rebuilt in most instances.

Most physical damage cannot be repaired by end users. For example, opening a hard disk drive in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and the read/write head. During normal operation, read/write heads float 3 to 6 nanometers above the platter surface, and the average dust particles found in a normal environment are typically around 30,000 nanometers in diameter. When these dust particles get caught between the read/write heads and the platter, they can cause new head crashes that further damage the platter and thus compromise the recovery process. Furthermore, end users generally do not have the hardware or technical expertise required to make these repairs. Consequently, data recovery companies are often employed to salvage important data with the more reputable ones using class 100 dust- and static-free cleanrooms.

Recovery techniques

Recovering data from physically damaged hardware can involve multiple techniques. Some damage can be repaired by replacing parts in the hard disk. This alone may make the disk usable, but there may still be logical damage. A specialized disk-imaging procedure is used to recover every readable bit from the surface. Once this image is acquired and saved on a reliable medium, the image can be safely analyzed for logical damage and will possibly allow much of the original file system to be reconstructed.
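A minimal sketch of that imaging idea in Python, assuming raw read access to the source device (the device path, block size, and size argument are placeholders). Dedicated imagers such as ddrescue handle retries, variable block sizes, and map files far more carefully than this.

```python
import os

BLOCK = 4096  # read granularity; real imagers vary this adaptively

def image_device(src_path, dst_path, size):
    """Copy every readable block from src to dst, recording unreadable offsets."""
    bad = []
    src = os.open(src_path, os.O_RDONLY)
    with open(dst_path, "wb") as dst:
        for offset in range(0, size, BLOCK):
            os.lseek(src, offset, os.SEEK_SET)
            try:
                chunk = os.read(src, min(BLOCK, size - offset))
                dst.write(chunk)
            except OSError:
                # Unreadable region: fill with zeros and note it for later retries.
                dst.write(b"\x00" * min(BLOCK, size - offset))
                bad.append(offset)
    os.close(src)
    return bad  # a crude analogue of a ddrescue map file

# Example (paths and size are placeholders):
# bad_offsets = image_device("/dev/sdb", "drive.img", 500_107_862_016)
```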

Hardware repair

Media that has suffered a catastrophic electronic failure requires data recovery in order to salvage its contents.

A common misconception is that a damaged printed circuit board (PCB) may be simply replaced during recovery procedures by an identical PCB from a healthy drive. While this may work in rare circumstances on hard disk drives manufactured before 2003, it will not work on newer drives. Electronics boards of modern drives usually contain drive-specific adaptation data (generally a map of bad sectors and tuning parameters) and other information required to properly access data on the drive. Replacement boards often need this information to effectively recover all of the data. The replacement board may need to be reprogrammed. Some manufacturers (Seagate, for example) store this information on a serial EEPROM chip, which can be removed and transferred to the replacement board.

Each hard disk drive has what is called a system area or service area; this portion of the drive, which is not directly accessible to the end user, usually contains the drive's firmware and adaptive data that help the drive operate within normal parameters. One function of the system area is to log defective sectors within the drive, essentially telling the drive where it can and cannot write data.

The sector lists are also stored on various chips attached to the PCB, and they are unique to each hard disk drive. If the data on the PCB do not match what is stored on the platter, then the drive will not calibrate properly. In most cases the drive heads will click because they are unable to find the data matching what is stored on the PCB.

Logical damage

Result of a failed data recovery from a hard disk drive.

The term "logical damage" refers to situations in which the error is not a problem in the hardware and requires software-level solutions.

Corrupt partitions and file systems, media errors

In some cases, data on a hard disk drive can be unreadable due to damage to the partition table or file system, or to (intermittent) media errors. In the majority of these cases, at least a portion of the original data can be recovered by repairing the damaged partition table or file system using specialized data recovery software such as TestDisk; software like ddrescue can image media despite intermittent errors, and image raw data when there is partition table or file system damage. This type of data recovery can be performed by people without expertise in drive hardware as it requires no special physical equipment or access to platters.

Sometimes data can be recovered using relatively simple methods and tools; more serious cases can require expert intervention, particularly if parts of files are irrecoverable. Data carving is the recovery of parts of damaged files using knowledge of their structure.
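As a toy illustration of data carving, the sketch below scans a disk image for JPEG data using the well-known start-of-image (FF D8 FF) and end-of-image (FF D9) markers. Real carvers support many formats and cope with fragmentation and false positives, which this sketch ignores; the file names are placeholders.

```python
def carve_jpegs(image_path):
    """Naive carver: extract byte runs between JPEG SOI and EOI markers."""
    data = open(image_path, "rb").read()
    found, pos = [], 0
    while True:
        start = data.find(b"\xff\xd8\xff", pos)   # SOI marker
        if start == -1:
            break
        end = data.find(b"\xff\xd9", start)       # EOI marker
        if end == -1:
            break
        found.append(data[start:end + 2])
        pos = end + 2
    return found

# for i, blob in enumerate(carve_jpegs("drive.img")):
#     open(f"carved_{i}.jpg", "wb").write(blob)
```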

Overwritten data

After data has been physically overwritten on a hard disk drive, it is generally assumed that the previous data are no longer possible to recover. In 1996, Peter Gutmann, a computer scientist, presented a paper that suggested overwritten data could be recovered through the use of magnetic force microscopy. In 2001, he presented another paper on a similar topic. To guard against this type of data recovery, Gutmann and Colin Plumb designed a method of irreversibly scrubbing data, known as the Gutmann method and used by several disk-scrubbing software packages.

Substantial criticism has followed, primarily dealing with the lack of any concrete examples of significant amounts of overwritten data being recovered. Gutmann's article contains a number of errors and inaccuracies, particularly regarding information about how data is encoded and processed on hard drives. Although Gutmann's theory may be correct, there is no practical evidence that overwritten data can be recovered, and published research instead supports the conclusion that it cannot.

Solid-state drives (SSD) overwrite data differently from hard disk drives (HDD) which makes at least some of their data easier to recover. Most SSDs use flash memory to store data in pages and blocks, referenced by logical block addresses (LBA) which are managed by the flash translation layer (FTL). When the FTL modifies a sector it writes the new data to another location and updates the map so the new data appear at the target LBA. This leaves the pre-modification data in place, with possibly many generations, and recoverable by data recovery software.
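The out-of-place update behaviour can be pictured with a deliberately simplified, hypothetical flash translation layer in Python: rewriting an LBA appends a new physical page and repoints the map, while the stale page holding the previous contents remains on the medium. This is only a conceptual sketch, not how any real SSD firmware is structured.

```python
# Highly simplified FTL model: logical block addresses map to physical pages.
pages = []        # physical flash pages, appended in write order
mapping = {}      # LBA -> index into pages (the FTL map)

def ftl_write(lba, data):
    # Out-of-place update: write a fresh page and repoint the map.
    pages.append(data)
    mapping[lba] = len(pages) - 1

ftl_write(7, b"old contents")
ftl_write(7, b"new contents")   # same LBA, different physical page

print(pages[mapping[7]])        # b'new contents': what the host sees at LBA 7
print(pages)                    # the stale page holding b'old contents' is still present
```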

Lost, deleted, and formatted data

Sometimes, data present in physical drives (internal or external hard disks, pen drives, etc.) gets lost, deleted, or formatted due to circumstances such as a virus attack, accidental deletion, or accidental use of Shift+Delete. In these cases, data recovery software is used to recover or restore the data files.

Logical bad sector

Among the logical failures of hard disks, a logical bad sector is the most common fault that makes data unreadable. It is sometimes possible to sidestep error detection in software, and perhaps, with repeated reading and statistical analysis, recover at least some of the underlying stored data. Sometimes prior knowledge of the data stored and of the error detection and correction codes can be used to recover even erroneous data. However, if the underlying physical drive is degraded badly enough, at least the hardware surrounding the data must be replaced, or it might even be necessary to apply laboratory techniques to the physical recording medium. Each of these approaches is progressively more expensive, and as such progressively more rarely sought.
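One way to picture the repeated-reading idea is a byte-wise majority vote over many read attempts of the same marginal sector. The sketch below assumes a hypothetical read_sector callable standing in for whatever raw-read mechanism is actually available; it illustrates the statistical principle only, not how professional tools work internally.

```python
from collections import Counter

def majority_read(read_sector, attempts=25):
    """read_sector() is a hypothetical callable returning one raw sector (bytes)."""
    reads = [read_sector() for _ in range(attempts)]
    recovered = bytearray()
    for i in range(len(reads[0])):
        # Pick the most common value seen at this byte position across all reads.
        votes = Counter(r[i] for r in reads)
        recovered.append(votes.most_common(1)[0][0])
    return bytes(recovered)
```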

Eventually, if the final, physical storage medium has indeed been disturbed badly enough, recovery will not be possible using any means; the information has irreversibly been lost.

Remote data recovery

Recovery experts do not always need to have physical access to the damaged hardware. When the lost data can be recovered by software techniques, they can often perform the recovery using remote access software over the Internet, LAN or other connection to the physical location of the damaged media. The process is essentially no different from what the end user could perform by themselves.

Remote recovery requires a stable connection with an adequate bandwidth. However, it is not applicable where access to the hardware is required, as in cases of physical damage.

Four phases of data recovery

Usually, there are four phases when it comes to successful data recovery, though that can vary depending on the type of data corruption and recovery required.

Phase 1
Repair the hard disk drive
The hard drive is repaired in order to get it running in some form, or at least in a state suitable for reading the data from it. For example, if heads are bad they need to be changed; if the PCB is faulty then it needs to be fixed or replaced; if the spindle motor is bad the platters and heads should be moved to a new drive.
Phase 2
Image the drive to a new drive or a disk image file
When a hard disk drive fails, getting the data off the drive is the top priority. The longer a faulty drive is used, the more likely further data loss is to occur. Creating an image of the drive ensures that there is a secondary copy of the data on another device, on which it is safe to perform testing and recovery procedures without harming the source.
Phase 3
Logical recovery of files, partition, MBR and filesystem structures
After the drive has been cloned to a new drive, it is then suitable to attempt the retrieval of lost data. If the drive has failed logically, there are a number of possible reasons for that. Using the clone, it may be possible to repair the partition table or master boot record (MBR) in order to read the file system's data structures and retrieve stored data (a sketch of reading an MBR partition table follows Phase 4 below).
Phase 4
Repair damaged files that were retrieved
Data damage can be caused when, for example, a file is written to a sector on the drive that has been damaged. This is the most common cause in a failing drive, meaning that data needs to be reconstructed to become readable. Corrupted documents can be recovered by several software methods or by manually reconstructing the document using a hex editor.
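To give a concrete sense of the on-disk structures repaired in Phase 3, this Python sketch parses the classic MBR partition table from a drive image: the four 16-byte partition entries start at offset 446 of the first sector, which ends with the 0x55AA boot signature. It only inspects the table; real repair tools perform far more validation before rewriting anything, and the image path is a placeholder.

```python
import struct

def read_mbr(image_path):
    """Parse the four primary partition entries from a drive image's first sector."""
    with open(image_path, "rb") as f:
        sector = f.read(512)
    if sector[510:512] != b"\x55\xaa":
        print("warning: MBR boot signature missing or damaged")
    partitions = []
    for i in range(4):
        entry = sector[446 + i * 16: 446 + (i + 1) * 16]
        boot_flag, ptype = entry[0], entry[4]
        lba_start, sector_count = struct.unpack("<II", entry[8:16])
        if ptype != 0:  # partition type 0 means the slot is unused
            partitions.append((i, hex(ptype), lba_start, sector_count, bool(boot_flag & 0x80)))
    return partitions

# print(read_mbr("drive.img"))   # e.g. [(0, '0x7', 2048, 204800, True)]
```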

Restore disk

The Windows operating system can be reinstalled on a computer that is already licensed for it. The reinstallation can be done by downloading the operating system or by using a "restore disk" provided by the computer manufacturer. Eric Lundgren was fined and sentenced to U.S. federal prison in April 2018 for producing 28,000 restore disks and intending to distribute them for about 25 cents each as a convenience to computer repair shops.

List of data recovery software

Bootable

Data recovery cannot always be done on a running system. As a result, a boot disk, live CD, live USB, or other type of live distro containing a minimal operating system and a set of recovery tools is used instead.

Consistency checkers

File recovery

Forensics

Imaging tools

  • Clonezilla: a free disk cloning, disk imaging, data recovery, and deployment boot disk
  • dd: common byte-to-byte cloning tool found on Unix-like systems
  • ddrescue: an open-source tool similar to dd but with the ability to skip over and subsequently retry bad blocks on failing storage devices
  • Team Win Recovery Project: a free and open-source recovery system for Android devices

Data sanitization

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Data_sanitization

Data sanitization involves the secure and permanent erasure of sensitive data from datasets and media to guarantee that no residual data can be recovered even through extensive forensic analysis. Data sanitization has a wide range of applications but is mainly used for clearing out end-of-life electronic devices or for the sharing and use of large datasets that contain sensitive information. The main strategies for erasing personal data from devices are physical destruction, cryptographic erasure, and data erasure. While the term data sanitization may lead some to believe that it only includes data on electronic media, the term also broadly covers physical media, such as paper copies. These data types are termed soft for electronic files and hard for physical media paper copies. Data sanitization methods are also applied for the cleaning of sensitive data, such as through heuristic-based methods, machine-learning based methods, and k-source anonymity.

This erasure is necessary as an increasing amount of data is moving to online storage, which poses a privacy risk if the device is resold to another individual. The importance of data sanitization has risen in recent years as private information is increasingly stored in an electronic format and larger, more complex datasets are being used to distribute private information. Electronic storage has expanded and enabled more private data to be stored, which requires more advanced and thorough data sanitization techniques to ensure that no data is left on a device once it is no longer in use. Technological tools that enable the transfer of large amounts of data also allow more private data to be shared. Especially with the increasing popularity of cloud-based information sharing and storage, data sanitization methods that ensure all shared data is cleaned have become a significant concern. It is therefore sensible for governments and private industry to create and enforce data sanitization policies to prevent data loss or other security incidents.

Data sanitization policy in public and private sectors

While the practice of data sanitization is common knowledge in most technical fields, it is not consistently understood across all levels of business and government. Thus, a comprehensive data sanitization policy is required in government contracting and private industry in order to avoid the possible loss of data, leaking of state secrets to adversaries, disclosure of proprietary technologies, and possibly being barred from contract competition by government agencies.

CIA Triad, By John M. Kennedy, Creative Commons Attribution-Share Alike 3.0, Wikimedia

With the increasingly connected world, it has become even more critical that governments, companies, and individuals follow specific data sanitization protocols to ensure that the confidentiality of information is sustained throughout its lifecycle. This step is critical to the core Information Security triad of Confidentiality, Integrity, and Availability. This CIA Triad is especially relevant to those who operate as government contractors or handle other sensitive private information. To this end, government contractors must follow specific data sanitization policies and use these policies to enforce the National Institute of Standards and Technology recommended guidelines for Media Sanitization covered in NIST Special Publication 800-88. This applies in particular to any government work which requires CUI (Controlled Unclassified Information) or above and is required by DFARS Clause 252.204-7012, Safeguarding Covered Defense Information and Cyber Incident Reporting. While private industry may not be required to follow NIST 800-88 standards for data sanitization, it is typically considered to be a best practice across industries with sensitive data. To further compound the issue, the ongoing shortage of cyber specialists and confusion about proper cyber hygiene has created a skill and funding gap for many government contractors.

However, failure to follow these recommended sanitization policies may result in severe consequences, including losing data, leaking state secrets to adversaries, losing proprietary technologies, and being barred from contract competition by government agencies. Therefore, the government contractor community must ensure its data sanitization policies are well defined and follow NIST guidelines for data sanitization. Additionally, while the core focus of data sanitization may seem to be on electronic "soft copy" data, other data sources such as "hard copy" documents must be addressed in the same sanitization policies.

To examine existing instances of data sanitization policies and determine the impacts of not developing, utilizing, or following these policy guidelines and recommendations, research data was coalesced not only from the government contracting sector but also from other critical industries such as Defense, Energy, and Transportation. These were selected as they typically also fall under government regulations, and therefore NIST (National Institute of Standards and Technology) guidelines and policies would also apply in the United States. Primary data comes from a study performed by the independent research company Coleman Parkes Research in August 2019. This research project targeted many different senior cyber executives and policy makers, surveying over 1,800 senior stakeholders. The data from Coleman Parkes shows that 96% of organizations have a data sanitization policy in place; however, in the United States, only 62% of respondents felt that the policy is communicated well across the business. Additionally, it reveals that remote and contract workers were the least likely to comply with data sanitization policies. This has become a more pressing issue as many government contractors and private companies have been working remotely due to the Covid-19 pandemic, and the trend is likely to continue after the return to normal working conditions.

On June 26, 2021, a basic Google search for "data lost due to non-sanitization" returned over 20 million results. These included articles on data breaches and the loss of business, military secrets and proprietary data losses, PHI (Protected Health Information), PII (Personally Identifiable Information), and many articles on performing essential data sanitization. Many of these articles also point to existing data sanitization and security policies of companies and government entities, such as the U.S. Environmental Protection Agency's "Sample Policy and Guidance Language for Federal Media Sanitization". Based on these articles and NIST 800-88 recommendations, depending on its data security level or categorization, data should be:

  • Cleared – Provide a basic level of data sanitization by overwriting data sectors to remove any previous data remnants that a basic format would not include. Again, the focus is on electronic media. This method is typically utilized if the media is going to be re-used within the organization at a similar data security level.
  • Purged – May use physical (degaussing) or logical methods (sector overwrite) to make the target media unreadable. Typically utilized when media is no longer needed and is at a lower data security level.
  • Destroyed – Permanently renders the data irretrievable and is commonly used when media is leaving an organization or has reached its end of life, i.e., paper shredding or hard drive/media crushing and incineration. This method is typically utilized for media containing highly sensitive information and state secrets which could cause grave damage to national security or to the privacy and safety of individuals.

Data sanitization road blocks

The International Information Systems Security Certification Consortium 2020 Cyber Workforce study shows that the global cybersecurity industry still has over 3.12 million unfilled positions due to a skills shortage. Therefore, those with the correct skillset to implement NIST 800-88 in policies may come at a premium labor rate. In addition, staffing and funding need to adjust to meet policy needs to properly implement these sanitization methods in tandem with appropriate Data level categorization to improve data security outcomes and reduce data loss. In order to ensure the confidentiality of customer and client data, government and private industry must create and follow concrete data sanitization policies which align with best practices, such as those outlined in NIST 800-88. Without consistent and enforced policy requirements, the data will be at increased risk of compromise. To achieve this, entities must allow for a cybersecurity wage premium to attract qualified talent. In order to prevent the loss of data and therefore Proprietary Data, Personal Information, Trade Secrets, and Classified Information, it is only logical to follow best practices.

Data sanitization policy best practices

Secret-Restricted Data Cover Sheet, By Glunggenbauer, Shared under CC BY 2.0 Wikimedia

A data sanitization policy must be comprehensive: it should cover data levels and their correlating sanitization methods, and it should include all forms of media, both soft and hard copy. Categories of data should also be defined so that each level of data aligns with an appropriate sanitization method under the policy. For example, controlled unclassified information on electronic storage devices may be cleared or purged, but devices storing secret or top secret classified materials should be physically destroyed.

Any data sanitization policy should be enforceable and show what department and management structure has the responsibility to ensure data is sanitized accordingly. This policy will require a high-level management champion (typically the Chief Information Security Officer or another C-suite equivalent) for the process and to define responsibilities and penalties for parties at all levels. This policy champion will include defining concepts such as the Information System Owner and Information Owner to define the chain of responsibility for data creation and eventual sanitization. The CISO or other policy champion should also ensure funding is allocated to additional cybersecurity workers to implement and enforce policy compliance. Auditing requirements are also typically included to prove media destruction and should be managed by these additional staff. For small business and those without a broad cyber background resources are available in the form of editable Data Sanitization policy templates. Many groups such as the IDSC (International Data Sanitization Consortium) provide these free of charge on their website https://www.datasanitization.org/.

Without training in data security and sanitization principles, it is unfeasible to expect users to comply with the policy. Therefore, the Sanitization Policy should include a matrix of instruction and frequency by job category to ensure that users, at every level, understand their part in complying with the policy. This task should be easy to accomplish as most government contractors are already required to perform annual Information Security training for all employees. Therefore, additional content can be added to ensure data sanitization policy compliance.

Sanitizing devices

The primary use of data sanitization is for the complete clearing of devices and destruction of all sensitive data once the storage device is no longer in use or is transferred to another Information system. This is an essential stage in the Data Security Lifecycle (DSL) and Information Lifecycle Management (ILM). Both are approaches for ensuring privacy and data management throughout the usage of an electronic device, as it ensures that all data is destroyed and unrecoverable when devices reach the end of their lifecycle.

There are three main methods of data sanitization for complete erasure of data: physical destruction, cryptographic erasure, and data erasure. All three erasure methods aim to ensure that deleted data cannot be accessed even through advanced forensic methods, which maintains the privacy of individuals’ data even after the mobile device is no longer in use.

Physical destruction

E-waste pending destruction and e-cycling

Physical erasure involves the manual destruction of stored data. This method uses mechanical shredders or degaussers to shred devices, such as phones, computers, hard drives, and printers, into small individual pieces. Different data security levels require different methods of destruction.

Degaussing is most commonly used on hard disk drives (HDDs), and involves the use of high-energy magnetic fields to permanently disrupt the functionality and memory storage of the device. When data is exposed to this strong magnetic field, any memory storage is neutralized and can not be recovered or used again. Degaussing does not apply to solid-state drives (SSDs), as their data is not stored magnetically. When particularly sensitive data is involved, it is typical to utilize processes such as paper pulping, special burns, and solid-state conversion, which ensure proper destruction of all sensitive media, including paper and other hard copy media, soft copy media, optical media, and specialized computing hardware.

Physical destruction often ensures that data is completely erased and cannot be used again. However, the physical by-products of mechanical waste from mechanical shredding can be damaging to the environment, but a recent trend in increasing the amount of e-waste material recovered by e-cycling has helped to minimize the environmental impact. Furthermore, once data is physically destroyed, it can no longer be resold or used again.

Cryptographic erasure

Cryptographic erasure involves the destruction of the secure key or passphrase that is used to protect stored information. Data encryption involves the development of a secure key that only enables authorized parties to gain access to the data that is stored. The permanent erasure of this key ensures that the private data stored can no longer be accessed. Cryptographic erasure is commonly implemented by manufacturers of the device itself, as encryption software is often built into the device. Encryption with key erasure involves encrypting all sensitive material in a way that requires a secure key to decrypt the information when it needs to be used. When the information needs to be deleted, the secure key can be erased. This provides greater ease of use, and a speedier data wipe, than other software methods, because it involves one deletion of secure information rather than a pass over each individual file.
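A minimal sketch of the encrypt-then-destroy-the-key principle, using the third-party Python "cryptography" package; it illustrates the idea at file level and says nothing about how drive manufacturers implement self-encrypting hardware.

```python
from cryptography.fernet import Fernet, InvalidToken

# Encrypt everything under one key while the device is in service.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer records, credentials, ...")

# Normal use: anyone holding the key can read the data back.
assert Fernet(key).decrypt(ciphertext) == b"customer records, credentials, ..."

# Cryptographic erasure: destroy the key; the ciphertext left behind is useless.
key = None

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)   # any other key fails
except InvalidToken:
    print("data is unrecoverable without the destroyed key")
```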

Cryptographic erasure is often used for data storage that does not contain as much private information, since there is a possibility that errors can occur due to manufacturing failures or human error during the process of key destruction, which creates a wider range of possible outcomes. This method allows data to continue to be stored on the device and does not require that the device be completely erased. This way, the device can be resold to another individual or company, since the physical integrity of the device itself is maintained. However, this assumes that the level of data encryption on the device is resistant to future encryption attacks. For instance, a hard drive utilizing cryptographic erasure with a 128-bit AES key may be secure now, but in five years it may be common to break this level of encryption. Therefore, the required level of data security should be declared in a data sanitization policy to future-proof the process.

Data erasure

The process of data erasure involves masking all information at the byte level through the insertion of random 0s and 1s on all sectors of the electronic equipment that is no longer in use. This software-based method ensures that all data previously stored is completely hidden and unrecoverable, which ensures full data sanitization. The efficacy and accuracy of this sanitization method can also be analyzed through auditable reports.
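A minimal sketch of overwriting followed by a verification pass, applied to a single file in Python; the path is a placeholder, and sanitizing a whole device would additionally have to reach hidden areas and remapped sectors that no user-level script like this can see.

```python
import os

def overwrite_and_verify(path, pattern=b"\x00", chunk=1024 * 1024):
    """Overwrite an existing file in place with a single-byte pattern, then read it back."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            block = pattern * min(chunk, size - written)
            f.write(block)
            written += len(block)
        f.flush()
        os.fsync(f.fileno())          # push the overwrite to stable storage
        # Verification pass: confirm every byte now matches the pattern.
        f.seek(0)
        while True:
            block = f.read(chunk)
            if not block:
                break
            if block.count(pattern[0]) != len(block):
                return False
        return True

# overwrite_and_verify("old_database_dump.sql")   # path is a placeholder
```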

Data erasure often ensures complete sanitization while also maintaining the physical integrity of the electronic equipment so that the technology can be resold or reused. This ability to recycle technological devices makes data erasure a more environmentally sound version of data sanitization. This method is also the most accurate and comprehensive since the efficacy of the data masking can be tested afterwards to ensure complete deletion. However, data erasure through software based mechanisms requires more time compared to other methods.

Secure erase

A number of storage device command sets define a command that, when passed to the device, causes it to perform a built-in sanitization procedure. The following command sets define such a standard command:

  • ATA (including SATA) defines a Security Erase command. Two levels of thoroughness are defined.
  • SCSI (including SAS and other physical connections) defines a SANITIZE command.
  • NVMe defines formatting with secure erase.
  • Opal Storage Specification specifies a command set for self-encrypting drives and cryptographic erase, available in addition to command-set methods.

The drive usually performs fast cryptographic erasure when data is encrypted, and a slower data erasure by overwriting otherwise. SCSI allows for asking for a specific type of erasure.
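On Linux systems, the ATA Security Erase sequence is commonly issued with the hdparm utility; the sketch below wraps the usual two-step sequence in Python. The device path and password are placeholders, the commands are destructive, and handling of the drive's "frozen" security state is deliberately omitted, so treat this as an outline rather than a ready-to-run procedure.

```python
import subprocess

def ata_secure_erase(device="/dev/sdX", password="p"):
    """Issue ATA Security Erase via hdparm (Linux). DESTROYS ALL DATA on the device."""
    # Step 1: set a temporary user password to unlock the security feature set.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", password, device], check=True)
    # Step 2: trigger the drive's built-in erase routine using that password.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", password, device], check=True)

# ata_secure_erase()  # do not run against a device you care about
```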

If implemented correctly, the built-in sanitization feature is sufficient to render data unrecoverable. The NIST approves of the use of this feature. There have been a few reported instances of failures to erase some or all data due to buggy firmware, sometimes readily apparent in a sector editor.

Necessity of data sanitization

There has been increased usage of mobile devices, Internet of Things (IoT) technologies, cloud-based storage systems, portable electronic devices, and various other electronic methods to store sensitive information; implementing effective erasure methods once a device is no longer in use has therefore become crucial to protect sensitive data. Due to the increased usage of electronic devices in general and the increased storage of private information on these devices, the need for data sanitization has become much more urgent in recent years.

There are also specific methods of sanitization that do not fully clean devices of private data which can prove to be problematic. For example, some remote wiping methods on mobile devices are vulnerable to outside attacks and efficacy depends on the unique efficacy of each individual software system installed. Remote wiping involves sending a wireless command to the device when it has been lost or stolen that directs the device to completely wipe out all data. While this method can be very beneficial, it also has several drawbacks. For example, the remote wiping method can be manipulated by attackers to signal the process when it is not yet necessary. This results in incomplete data sanitization. If attackers do gain access to the storage on the device, the user risks exposing all private information that was stored.

Cloud computing and storage has become an increasingly popular method of data storage and transfer. However, there are certain privacy challenges associated with cloud computing that have not been fully explored. Cloud computing is vulnerable to various attacks such as through code injection, the path traversal attack, and resource depletion because of the shared pool structure of these new techniques. These cloud storage models require specific data sanitization methods to combat these issues. If data is not properly removed from cloud storage models, it opens up the possibility for security breaches at multiple levels.

Risks posed by inadequate data-set sanitization

Inadequate data sanitization methods can result in two main problems: a breach of private information and compromises to the integrity of the original dataset. If data sanitization methods fail to remove all sensitive information, they pose the risk of leaking this information to attackers. Numerous studies have been conducted to optimize ways of preserving sensitive information. Some data sanitization methods are highly sensitive to distinct points that lie far from the rest of the data; this type of sanitization is very precise and can detect anomalies even if the poisoned data point is relatively close to true data. Another class of methods also removes outliers, but does so in a more general way: it identifies the general trend of the data, discards any data that strays from it, and is able to target anomalies even when they are inserted as a group. In general, data sanitization techniques use algorithms to detect anomalies and remove any suspicious points that may be poisoned data or sensitive information.
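A toy version of the "general trend" approach: compute the mean and standard deviation of a numeric column and drop points more than a chosen number of deviations away. Real sanitization methods for poisoned or sensitive data are considerably more sophisticated; the values and threshold here are invented for illustration.

```python
from statistics import mean, stdev

def sanitize_outliers(values, threshold=3.0):
    """Drop points that sit more than `threshold` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    if s == 0:
        return list(values)
    return [v for v in values if abs(v - m) / s <= threshold]

readings = [10.1, 9.8, 10.3, 9.9, 10.0, 58.0]      # 58.0 is a suspicious/poisoned point
print(sanitize_outliers(readings, threshold=2.0))  # the anomaly is removed
```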

Furthermore, data sanitization methods may remove useful, non-sensitive information, which then renders the sanitized dataset less useful and altered from the original. There have been iterations of common data sanitization techniques that attempt to correct the issue of the loss of original dataset integrity. In particular, Liu, Xuan, Wen, and Song offered a new algorithm for data sanitization called the Improved Minimum Sensitive Itemsets Conflict First Algorithm (IMSICF) method. There is often a lot of emphasis that is put into protecting the privacy of users, so this method brings a new perspective that focuses on also protecting the integrity of the data. It functions in a way that has three main advantages: it learns to optimize the process of sanitization by only cleaning the item with the highest conflict count, keeps parts of the dataset with highest utility, and also analyzes the conflict degree of the sensitive material. Robust research was conducted on the efficacy and usefulness of this new technique to reveal the ways that it can benefit in maintaining the integrity of the dataset. This new technique is able to firstly pinpoint the specific parts of the dataset that are possibly poisoned data and also use computer algorithms to make a calculation between the tradeoffs of how useful it is to decide if it should be removed. This is a new way of data sanitization that takes into account the utility of the data before it is immediately discarded.

Applications of data sanitization

Data sanitization methods are also implemented for privacy preserving data mining, association rule hiding, and blockchain-based secure information sharing. These methods involve the transfer and analysis of large datasets that contain private information. This private information needs to be sanitized before being made available online so that sensitive material is not exposed. Data sanitization is used to ensure privacy is maintained in the dataset, even when it is being analyzed.

Privacy preserving data mining

Privacy Preserving Data Mining (PPDM) is the process of data mining while maintaining privacy of sensitive material. Data mining involves analyzing large datasets to gain new information and draw conclusions. PPDM has a wide range of uses and is an integral step in the transfer or use of any large data set containing sensitive material.

Data sanitization is an integral step to privacy preserving data mining because private datasets need to be sanitized before they can be utilized by individuals or companies for analysis. The aim of privacy preserving data mining is to ensure that private information cannot be leaked or accessed by attackers and sensitive data is not traceable to individuals that have submitted the data. Privacy preserving data mining aims to maintain this level of privacy for individuals while also maintaining the integrity and functionality of the original dataset. In order for the dataset to be used, necessary aspects of the original data need to be protected during the process of data sanitization. This balance between privacy and utility has been the primary goal of data sanitization methods.

One approach to achieve this optimization of privacy and utility is through encrypting and decrypting sensitive information using a process called key generation. After the data is sanitized, key generation is used to ensure that this data is secure and cannot be tampered with. Approaches such as the Rider optimization Algorithm (ROA), also called Randomized ROA (RROA) use these key generation strategies to find the optimal key so that data can be transferred without leaking sensitive information.

Some versions of key generation have also been optimized to fit larger datasets. For example, a novel method-based Privacy Preserving Distributed Data Mining strategy is able to increase privacy and hide sensitive material through key generation. This version of sanitization allows large amounts of material to be sanitized. For companies that are seeking to share information with several different groups, this methodology may be preferred over original methods that take much longer to process.

Certain models of data sanitization delete or add information to the original database in an effort to preserve the privacy of each subject. These heuristic based algorithms are beginning to become more popularized, especially in the field of association rule mining. Heuristic methods involve specific algorithms that use pattern hiding, rule hiding, and sequence hiding to keep specific information hidden. This type of data hiding can be used to cover wide patterns in data, but is not as effective for specific information protection. Heuristic based methods are not as suited to sanitizing large datasets, however, recent developments in the heuristics based field have analyzed ways to tackle this problem. An example includes the MR-OVnTSA approach, a heuristics based sensitive pattern hiding approach for big data, introduced by Shivani Sharma and Durga Toshniwa. This approach uses a heuristics based method called the ‘MapReduce Based Optimum Victim Item and Transaction Selection Approach’, also called MR-OVnTSA, that aims to reduce the loss of important data while removing and hiding sensitive information. It takes advantage of algorithms that compare steps and optimize sanitization.

An important goal of PPDM is to strike a balance between maintaining the privacy of users that have submitted the data while also enabling developers to make full use of the dataset. Many measures of PPDM directly modify the dataset and create a new version that makes the original unrecoverable. It strictly erases any sensitive information and makes it inaccessible for attackers.

Association rule mining

One type of data sanitization is rule-based PPDM, which uses defined computer algorithms to clean datasets. Association rule hiding is the process of data sanitization as applied to transactional databases. Transactional databases are the general term for data storage used to record transactions as organizations conduct their business. Examples include shipping payments, credit card payments, and sales orders. One survey analyzes fifty-four different methods of data sanitization and presents four major findings about trends in the field.

Certain new methods of data sanitization rely on machine and deep learning. There are various weaknesses in the current use of data sanitization: many methods are not intricate or detailed enough to protect against more specific data attacks. This effort to maintain privacy while mining important data is referred to as privacy-preserving data mining. Machine learning develops methods that are more adapted to different types of attacks and can learn to face a broader range of situations. Deep learning is able to simplify data sanitization methods and run these protective measures in a more efficient and less time-consuming way.

There have also been hybrid models that utilize both rule based and machine deep learning methods to achieve a balance between the two techniques.

Blockchain-based secure information sharing

Browser backed cloud storage systems are heavily reliant on data sanitization and are becoming an increasingly popular route of data storage. Furthermore, the ease of usage is important for enterprises and workplaces that use cloud storage for communication and collaboration.

Blockchain is used to record and transfer information in a secure way and data sanitization techniques are required to ensure that this data is transferred more securely and accurately. It’s especially applicable for those working in supply chain management and may be useful for those looking to optimize the supply chain process. For example, the Whale Optimization Algorithm (WOA), uses a method of secure key generation to ensure that information is shared securely through the blockchain technique. The need to improve blockchain methods is becoming increasingly relevant as the global level of development increases and becomes more electronically dependent.

Industry specific applications

Healthcare

The healthcare industry is an important sector that relies heavily on data mining and use of datasets to store confidential information about patients. The use of electronic storage has also been increasing in recent years, which requires more comprehensive research and understanding of the risks that it may pose. Currently, data mining and storage techniques are only able to store limited amounts of information. This reduces the efficacy of data storage and increases the costs of storing data. New advanced methods of storing and mining data that involve cloud based systems are becoming increasingly popular as they are able to both mine and store larger amounts of information.

Data erasure

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Data_erasure

Data erasure (sometimes referred to as data clearing, data wiping, or data destruction) is a software-based method of data sanitization that aims to completely destroy all electronic data residing on a hard disk drive or other digital media by overwriting data onto all sectors of the device in an irreversible process. By overwriting the data on the storage device, the data is rendered irrecoverable.

Ideally, software designed for data erasure should:

  1. Allow for selection of a specific standard, based on unique needs, and
  2. Verify the overwriting method has been successful and removed data across the entire device.

Permanent data erasure goes beyond basic file deletion commands, which only remove direct pointers to the data disk sectors and make data recovery possible with common software tools. Unlike degaussing and physical destruction, which render the storage media unusable, data erasure removes all information while leaving the disk operable. New flash memory-based media implementations, such as solid-state drives or USB flash drives, can cause data erasure techniques to fail, allowing remnant data to be recoverable.

Software-based overwriting uses a software application to write a stream of zeros, ones or meaningless pseudorandom data onto all sectors of a hard disk drive. There are key differentiators between data erasure and other overwriting methods, which can leave data intact and raise the risk of data breach, identity theft or failure to achieve regulatory compliance. Many data eradication programs also provide multiple overwrites so that they support recognized government and industry standards, though a single-pass overwrite is widely considered to be sufficient for modern hard disk drives. Good software should provide verification of data removal, which is necessary for meeting certain standards.

To protect the data on lost or stolen media, some data erasure applications remotely destroy the data if the password is incorrectly entered. Data erasure tools can also target specific data on a disk for routine erasure, providing a hacking protection method that is less time-consuming than software encryption. Hardware/firmware encryption built into the drive itself or integrated controllers is a popular solution with no degradation in performance at all.

Encryption

When encryption is in place, data erasure acts as a complement to crypto-shredding, or the practice of 'deleting' data by (only) deleting or overwriting the encryption keys.

Presently, dedicated hardware/firmware encryption solutions can perform a 256-bit full AES encryption faster than the drive electronics can write the data. Drives with this capability are known as self-encrypting drives (SEDs); they are present on most modern enterprise-level laptops and are increasingly used in the enterprise to protect the data. Changing the encryption key renders inaccessible all data stored on a SED, which is an easy and very fast method for achieving a 100% data erasure. Theft of an SED results in a physical asset loss, but the stored data is inaccessible without the decryption key that is not stored on a SED, assuming there are no effective attacks against AES or its implementation in the drive hardware.

Importance

Information technology assets commonly hold large volumes of confidential data. Social security numbers, credit card numbers, bank details, medical history and classified information are often stored on computer hard drives or servers. These can inadvertently or intentionally make their way onto other media such as printers, USB, flash, Zip, Jaz, and REV drives.

Data breach

Increased storage of sensitive data, combined with rapid technological change and the shorter lifespan of IT assets, has driven the need for permanent data erasure of electronic devices as they are retired or refurbished. Also, compromised networks and laptop theft and loss, as well as that of other portable media, are increasingly common sources of data breaches.

If data erasure does not occur when a disk is retired or lost, an organization or user faces a possibility that the data will be stolen and compromised, leading to identity theft, loss of corporate reputation, threats to regulatory compliance and financial impacts. Companies spend large amounts of money to make sure their data is erased when they discard disks. High-profile incidents of data theft include:

  • CardSystems Solutions (2005-06-19): Credit card breach exposes 40 million accounts.
  • Lifeblood (2008-02-13): Missing laptops contain personal information including dates of birth and some Social Security numbers of 321,000.
  • Hannaford (2008-03-17): Breach exposes 4.2 million credit, debit cards.
  • Compass Bank (2008-03-21): Stolen hard drive contains 1,000,000 customer records.
  • University of Florida College of Medicine, Jacksonville (2008-05-20): Photographs and identifying information of 1,900 on improperly disposed computer.
  • Oklahoma Corporation Commission (2008-05-21): Server sold at auction compromises more than 5,000 Social Security numbers.
  • Department of Finance, the Australian Electoral Commission and National Disability Insurance Agency (2017-11-02) - 50,000 Australians and 5000 Federal Public servant records.

Regulatory compliance

Strict industry standards and government regulations are in place that force organizations to mitigate the risk of unauthorized exposure of confidential corporate and government data. Regulations in the United States include HIPAA (Health Insurance Portability and Accountability Act); FACTA (The Fair and Accurate Credit Transactions Act of 2003); GLB (Gramm-Leach-Bliley); the Sarbanes-Oxley Act (SOx); and the Payment Card Industry Data Security Standard (PCI DSS), as well as the Data Protection Act in the United Kingdom. Failure to comply can result in fines and damage to company reputation, as well as civil and criminal liability.

Preserving assets and the environment

Data erasure offers an alternative to physical destruction and degaussing for secure removal of all the disk data. Physical destruction and degaussing destroy the digital media, requiring disposal and contributing to electronic waste while negatively impacting the carbon footprint of individuals and companies. Hard drives are nearly 100% recyclable and can be collected at no charge from a variety of hard drive recyclers after they have been sanitized.

Limitations

Data erasure may not work completely on flash based media, such as Solid State Drives and USB Flash Drives, as these devices can store remnant data which is inaccessible to the erasure technique, and data can be retrieved from the individual flash memory chips inside the device. Data erasure through overwriting only works on hard drives that are functioning and writing to all sectors. Bad sectors cannot usually be overwritten, but may contain recoverable information. Bad sectors, however, may be invisible to the host system and thus to the erasing software. Disk encryption before use prevents this problem. Software-driven data erasure could also be compromised by malicious code.

Differentiators

Software-based data erasure uses a disk-accessible application to write a combination of ones, zeroes, and any other alphanumeric character (also known as the "mask") onto each hard disk drive sector. The level of security when using software data destruction tools is increased dramatically by pre-testing hard drives for sector abnormalities and ensuring that the drive is 100% in working order. The number of wipes has become obsolete with the more recent inclusion of a "verify pass" which scans all sectors of the disk and checks against what character should be there, i.e., one pass of AA has to fill every writable sector of the hard disk. This makes any more than one pass unnecessary and certainly more damaging, especially in the case of large multi-terabyte drives.

Full disk overwriting

While there are many overwriting programs, only those capable of complete data erasure offer full security by destroying the data on all areas of a hard drive. Disk overwriting programs that cannot access the entire hard drive, including hidden/locked areas like the host protected area (HPA), device configuration overlay (DCO), and remapped sectors, perform an incomplete erasure, leaving some of the data intact. By accessing the entire hard drive, data erasure eliminates the risk of data remanence.

Data erasure can also bypass the operating system (OS). Overwriting programs that operate through the OS will not always perform a complete erasure because they cannot modify the contents of the hard drive that are actively in use by that OS. Because of this, many data erasure programs are provided in a bootable format, running from a live CD that contains all of the necessary software to erase the disk.

Hardware support

Data erasure can be deployed over a network to target multiple PCs rather than having to erase each one sequentially. In contrast with DOS-based overwriting programs that may not detect all network hardware, Linux-based data erasure software supports high-end server and storage area network (SAN) environments with hardware support for Serial ATA, Serial Attached SCSI (SAS) and Fibre Channel disks and remapped sectors. It operates directly with sector sizes such as 520, 524, and 528, removing the need to first reformat back to 512 sector size. WinPE has now overtaken Linux as the environment of choice since drivers can be added with little effort. This also helps with data destruction of tablets and other handheld devices that require pure UEFI environments without hardware NICs installed and/or lack UEFI network stack support.

Standards

Many government and industry standards exist for software-based overwriting that removes the data. A key factor in meeting these standards is the number of times the data is overwritten. Also, some standards require a method to verify that all the data have been removed from the entire hard drive and to view the overwrite pattern. Complete data erasure should account for hidden areas, typically DCO, HPA and remapped sectors.

The 1995 edition of the National Industrial Security Program Operating Manual (DoD 5220.22-M) permitted the use of overwriting techniques to sanitize some types of media by writing all addressable locations with a character, its complement, and then a random character. This provision was removed in a 2001 change to the manual and was never permitted for Top Secret media, but it is still listed as a technique by many providers of the data erasure software.

Data erasure software should provide the user with a validation certificate indicating that the overwriting procedure was completed properly. Data erasure software should also comply with requirements to erase hidden areas, provide a defects log list and list bad sectors that could not be overwritten.

Notable overwriting standards, listed with date, number of overwriting rounds, pattern, and notes:

  • U.S. Navy Staff Office Publication NAVSO P-5239-26 (1993) — 3 rounds; a character, its complement, random; verification is mandatory
  • U.S. Air Force System Security Instruction 5020 (1996) — 3 rounds; all zeros, all ones, any character; verification is mandatory
  • Peter Gutmann's Algorithm (1996) — 1 to 35 rounds; various patterns, including all of the other listed methods; originally intended for MFM and RLL disks, which are now obsolete
  • Bruce Schneier's Algorithm (1996) — 7 rounds; all ones, all zeros, then a pseudo-random sequence five times
  • Standard VSITR of the German Federal Office for Information Security (1999) — 7 rounds; the disk is filled with the sequences 0x00 and 0xFF, with 0xAA on the last pass
  • U.S. DoD Unclassified Computer Hard Drive Disposition (2001) — 3 rounds; a character, its complement, another pattern
  • German Federal Office for Information Security (2004) — 2 to 3 rounds; non-uniform pattern, its complement
  • Communications Security Establishment Canada ITSG-06 (2006) — 3 rounds; all ones or zeros, its complement, a pseudo-random pattern; for unclassified media
  • NIST SP-800-88 (2006) — 1 round; pattern unspecified
  • U.S. National Industrial Security Program Operating Manual, DoD 5220.22-M (2006) — 3 rounds; pattern unspecified; no longer specifies any method
  • NSA/CSS Storage Device Declassification Manual, SDDM (2007) — 0 rounds; degauss or destroy only
  • New Zealand Government Communications Security Bureau NZSIT 402 (2008) — 1 round; pattern unspecified; for data up to Confidential
  • Australian Government ICT Security Manual 2014 – Controls (2014) — 1 round; random pattern (only for disks larger than 15 GB); degauss magnetic media or destroy Top Secret media
  • NIST SP-800-88 Rev. 1 (2014) — 1 round; all zeros; outlines solutions based on media type
  • British HMG Infosec Standard 5, Baseline Standard (date unspecified) — 1 round; random pattern; verification is mandatory
  • British HMG Infosec Standard 5, Enhanced Standard (date unspecified) — 3 rounds; all ones, all zeros, random; verification is mandatory

Data can sometimes be recovered from a broken hard drive. However, if the platters on a hard drive are damaged, such as by drilling a hole through the drive (and the platters inside), then the data can only theoretically be recovered by bit-by-bit analysis of each platter with advanced forensic technology.

Number of overwrites needed

Data on floppy disks can sometimes be recovered by forensic analysis even after the disks have been overwritten once with zeros (or random zeros and ones).

This is not the case with modern hard drives:

  • According to the 2014 NIST Special Publication 800-88 Rev. 1, Section 2.4 (p. 7): "For storage devices containing magnetic media, a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data." It recommends cryptographic erase as a more general mechanism (see the sketch after this list).
  • According to the University of California, San Diego Center for Magnetic Recording Research's (now its Center for Memory and Recording Research) "Tutorial on Disk Drive Data Sanitization" (p. 8): "Secure erase does a single on-track erasure of the data on the disk drive. The U.S. National Security Agency published an Information Assurance Approval of single-pass overwrite, after technical testing at CMRR showed that multiple on-track overwrite passes gave no additional erasure." Secure erase is a feature built into modern hard drives and solid-state drives that overwrites all data on a disk, including remapped (error) sectors.
  • Further analysis by Wright et al. seems to also indicate that one overwrite is all that is generally required.
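As a rough illustration of the cryptographic erase mechanism recommended by NIST SP 800-88 Rev. 1, the Python sketch below (hypothetical names, using the third-party "cryptography" package) encrypts everything before it reaches the medium; discarding the key then renders the stored ciphertext unrecoverable without any overwrite pass. Self-encrypting drives implement the same idea in firmware rather than in application code.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party "cryptography" package

# All data written to the medium is encrypted under a single key, which in a
# self-encrypting drive would live in the drive controller, not on the platters.
key = AESGCM.generate_key(bit_length=256)
cipher = AESGCM(key)

def write_encrypted(plaintext: bytes) -> bytes:
    """What actually lands on the medium: a nonce plus ciphertext, never the plaintext."""
    nonce = os.urandom(12)
    return nonce + cipher.encrypt(nonce, plaintext, None)

stored = write_encrypted(b"sensitive record")

# Cryptographic erase: destroy the key. The ciphertext remains on the medium but
# is computationally infeasible to decrypt, so the (possibly huge) medium does
# not need to be overwritten at all.
key = None
cipher = None
```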

Even the possibility of recovering floppy disk data after an overwrite is disputed. Gutmann's well-known article cites a non-existent source, along with sources that do not actually demonstrate recovery but only partially successful observations. The article also rests on assumptions that suggest a limited understanding of how hard drives work, particularly their data processing and encoding. Its definition of "random" also differs from the usual one: Gutmann assumes pseudorandom data whose sequences are known to the recovering side, rather than unpredictable data such as the output of a cryptographically secure pseudorandom number generator.

E-waste and information security

The e-waste centre of Agbogbloshie, Ghana.

E-waste presents a potential security threat to individuals and exporting countries. Hard drives that are not properly erased before the computer is disposed of can be reopened, exposing sensitive information. Credit card numbers, private financial data, account information and records of online transactions can be accessed by most willing individuals. Organized criminals in Ghana commonly search the drives for information to use in local scams.

Government contracts have been discovered on hard drives found in Agbogbloshie.

Degaussing

From Wikipedia, the free encyclopedia

Degaussing is the process of decreasing or eliminating a remnant magnetic field. It is named after the gauss, a unit of magnetism, which in turn was named after Carl Friedrich Gauss. Due to magnetic hysteresis, it is generally not possible to reduce a magnetic field completely to zero, so degaussing typically induces a very small "known" field referred to as bias. Degaussing was originally applied to reduce ships' magnetic signatures during World War II. Degaussing is also used to reduce magnetic fields in cathode ray tube monitors and to destroy data held on magnetic storage.

Ships' hulls

USS Jimmy Carter in the magnetic silencing facility at Naval Base Kitsap for her first deperming treatment
Magnetic silencing facility at Beckoning Point, Pearl Harbor, 2012
RMS Queen Mary arriving in New York Harbor, 20 June 1945, with thousands of U.S. soldiers – note the prominent degaussing coil running around the hull
Control panel of the MES-device ("Magnetischer Eigenschutz" German: magnetic self-protection) in a German submarine

The term was first used by then-Commander Charles F. Goodeve, Royal Canadian Naval Volunteer Reserve, during World War II while trying to counter the German magnetic naval mines that were wreaking havoc on the British fleet.

Close-wrap deperming of Ivan Gren-class landing ship, 2016. The cables are floated into position before wrapping around the vessel.

The mines detected the increase in the magnetic field when the steel in a ship concentrated the Earth's magnetic field over it. Admiralty scientists, including Goodeve, developed a number of systems to induce a small "N-pole up" field into the ship to offset this effect, meaning that the net field was the same as the background. Since the Germans used the gauss as the unit of the strength of the magnetic field in their mines' triggers (not yet a standard measure), Goodeve referred to the various processes to counter the mines as "degaussing". The term became a common word.

The original method of degaussing was to install electromagnetic coils into the ships, known as coiling. In addition to being able to bias the ship continually, coiling also allowed the bias field to be reversed in the southern hemisphere, where the mines were set to detect "S-pole down" fields. British ships, notably cruisers and battleships, were well protected by about 1943.

Installing such special equipment was, however, far too expensive and difficult to do for all the ships that would need it, so the navy developed an alternative called wiping, which Goodeve also devised. In this procedure a large electrical cable was dragged upwards along the side of the ship, starting at the waterline, with a pulse of about 2000 amperes flowing through it. For submarines, the current came from the vessels' own propulsion batteries. This induced the proper field into the ship in the form of a slight bias. It was originally thought that the pounding of the sea and the ship's engines would slowly randomize this field, but in testing this was found not to be a real problem. A more serious problem was realized later: as a ship travels through Earth's magnetic field, it slowly picks up that field, counteracting the effects of the degaussing. From then on, captains were instructed to change direction as often as possible to avoid this problem. Nevertheless, the bias did wear off eventually, and ships had to be degaussed on a schedule. Smaller ships continued to use wiping throughout the war.

To aid the Dunkirk evacuation, the British "wiped" 400 ships in four days.

During World War II, the United States Navy commissioned a specialized class of degaussing ships that were capable of performing this function. One of them, USS Deperm (ADG-10), was named after the procedure.

After the war, the capabilities of magnetic fuzes were greatly improved: instead of detecting the field itself, they detected changes in it. This meant a degaussed ship with a magnetic "hot spot" would still set off the mine. Additionally, the precise orientation of the field was also measured, something a simple bias field could not remove, at least for all points on the ship. A series of increasingly complex coil systems was introduced to offset these effects, with modern systems including no fewer than three separate sets of coils to reduce the field in all axes.

Degaussing range

The effectiveness of ships' degaussing was monitored by shore-based degaussing ranges (also called degaussing stations or magnetic ranges) installed beside shipping channels outside ports. The vessel under test passed at a steady speed over loops on the seabed that were monitored from buildings on the shore. The installation was used both to establish the magnetic characteristics of a hull, and thus the correct setting of the degaussing equipment to be installed, and as a "spot check" on vessels to confirm that their degaussing equipment was performing correctly. Some stations had active coils that provided magnetic treatment, offering un-equipped ships some limited protection against future encounters with magnetic mines.

High-temperature superconductivity

The US Navy tested, in April 2009, a prototype of its High-Temperature Superconducting Degaussing Coil System, referred to as "HTS Degaussing". The system works by encircling the vessel with superconducting ceramic cables whose purpose is to neutralize the ship's magnetic signature, as in the legacy copper systems. The main advantage of the HTS Degaussing Coil system is greatly reduced weight (sometimes by as much as 80%) and increased efficiency.

A ferrous-metal-hulled ship or submarine, by its very nature, develops a magnetic signature as it travels, due to a magneto-mechanical interaction with Earth's magnetic field. It also picks up the magnetic orientation of the Earth's magnetic field where it is built. This signature can be exploited by magnetic mines or facilitate the detection of a submarine by ships or aircraft with magnetic anomaly detection (MAD) equipment. Navies use the deperming procedure, in conjunction with degaussing, as a countermeasure against this.

Specialized deperming facilities, such as the United States Navy's Lambert's Point Deperming Station at Naval Station Norfolk, or Pacific Fleet Submarine Drive-In Magnetic Silencing Facility (MSF) at Joint Base Pearl Harbor–Hickam, are used to perform the procedure. During a close-wrap magnetic treatment, heavy-gauge copper cables encircle the hull and superstructure of the vessel, and high electrical currents (up to 4000 amperes) are pulsed through the cables. This has the effect of "resetting" the ship's magnetic signature to the ambient level after flashing its hull with electricity. It is also possible to assign a specific signature that is best suited to the particular area of the world in which the ship will operate. In drive-in magnetic silencing facilities, all cables are either hung above, below and on the sides, or concealed within the structural elements of facilities. Deperming is "permanent". It is only done once unless major repairs or structural modifications are done to the ship.

Early experiments

With the introduction of iron ships, the adverse effect of the metal hull on steering compasses was noted. It was also observed that lightning strikes had a significant effect on compass deviation, identified in some extreme cases as being caused by the reversal of the ship's magnetic signature. In 1866, Evan Hopkins of London registered a patent for a process "to depolarise iron vessels and leave them thenceforth free from any compass-disturbing influence whatever". The technique was described as follows: "For this purpose he employed a number of Grove's batteries and electromagnets. The latter were to be passed along the plates till the desired end had been obtained... the process must not be overdone for fear of re-polarising in the opposite direction." The invention was, however, reported to be "incapable of being carried to a successful issue", and "quickly died a natural death".

Color cathode ray tubes

Color CRT displays, the technology underlying many television sets and computer monitors before the early 2010s, require degaussing. Many CRT displays use a metal plate near the front of the tube to ensure that each electron beam hits the corresponding phosphors of the correct color. If this plate becomes magnetized (e.g. if someone sweeps a magnet across the screen or places loudspeakers nearby), it imparts an undesired deflection to the electron beams and the displayed image becomes distorted and discolored.

To minimize this, CRTs have a copper or aluminum coil wrapped around the front of the display, known as the degaussing coil. Monitors without an internal coil can be degaussed using an external handheld version. Internal degaussing coils in CRTs are generally much weaker than external degaussing coils, since a better degaussing coil takes up more space. A degauss circuit induces an oscillating magnetic field with a decreasing amplitude which leaves the shadow mask with a reduced residual magnetization.

A degaussing in progress

Many televisions and monitors automatically degauss their picture tube when switched on, before an image is displayed. The high current surge that takes place during this automatic degauss produces an audible "thunk", a loud hum, or some clicking noises, which can be heard (and felt) when televisions and CRT computer monitors are switched on, as the capacitors discharge and inject current into the coil. Visually, this causes the image to shake dramatically for a short period of time. A degauss option is also usually available for manual selection in the operations menu of such appliances.

In most commercial equipment the AC current surge to the degaussing coil is regulated by a simple positive temperature coefficient (PTC) thermistor device, which initially has a low resistance, allowing a high current, but quickly changes to a high resistance, allowing minimal current, due to self-heating of the thermistor. Such devices are designed for a one-off transition from cold to hot at power up; "experimenting" with the degauss effect by repeatedly switching the device on and off may cause this component to fail. The effect will also be weaker, since the PTC will not have had time to cool off.

Magnetic data storage media

Data is stored on magnetic media, such as hard drives, floppy disks, and magnetic tape, by making very small areas called magnetic domains change their magnetic alignment to lie in the direction of an applied magnetic field. This phenomenon occurs in much the same way a compass needle points in the direction of the Earth's magnetic field. Degaussing, commonly called erasure, leaves the domains in random patterns with no preference in orientation, thereby rendering previous data unrecoverable. Some domains' magnetic alignment is not randomized after degaussing; the information these domains represent is commonly called magnetic remanence or remanent magnetization. Proper degaussing ensures there is insufficient magnetic remanence to reconstruct the data.

Erasure via degaussing may be accomplished in two ways: in AC erasure, the medium is degaussed by applying an alternating field that is reduced in amplitude over time from an initial high value (i.e., AC powered); in DC erasure, the medium is saturated by applying a unidirectional field (i.e., DC powered or by employing a permanent magnet). A degausser is a device that can generate a magnetic field for degaussing magnetic storage media. The magnetic field needed for degaussing magnetic data storage media is a powerful one that normal magnets cannot easily achieve and maintain.
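As a rough numeric illustration of AC erasure, the following Python sketch models the applied field as an alternating waveform whose amplitude decays over time; the key property is that the medium is cycled through progressively smaller excursions until the remanent magnetization approaches zero. The field strength, frequency, and decay constant used here are arbitrary illustrative values, not parameters of any real degausser.

```python
import math

PEAK_FIELD = 5000.0     # illustrative initial field amplitude (arbitrary units)
FREQUENCY = 60.0        # alternating-field frequency in Hz
DECAY_TIME = 0.5        # time constant of the amplitude decay, in seconds

def ac_erase_field(t: float) -> float:
    """Applied field at time t: a sine wave whose envelope decays exponentially."""
    envelope = PEAK_FIELD * math.exp(-t / DECAY_TIME)
    return envelope * math.sin(2 * math.pi * FREQUENCY * t)

# Print the peak field of each successive half-cycle; it shrinks toward zero,
# which is what progressively randomizes the magnetic domains.
half_cycle = 1.0 / (2 * FREQUENCY)
for n in range(10):
    t_peak = (n + 0.5) * half_cycle          # time of the nth half-cycle peak
    print(f"half-cycle {n}: peak field ~ {abs(ac_erase_field(t_peak)):.1f}")
```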

Irreversible damage to some media types

Many forms of generic magnetic storage media can be reused after degaussing, including reel-to-reel audio tape, VHS videocassettes, and floppy disks. These older media types are simply a raw medium that is overwritten with fresh new patterns, created by fixed-alignment read/write heads.

For certain forms of computer data storage, however, such as modern hard disk drives and some tape drives, degaussing renders the magnetic media completely unusable and damages the storage system. This is due to the devices having an infinitely variable read/write head positioning mechanism which relies on special servo control data (e.g. Gray Code) that is meant to be permanently recorded onto the magnetic media. This servo data is written onto the media a single time at the factory using special-purpose servo writing hardware.

The servo patterns are normally never overwritten by the device for any reason and are used to precisely position the read/write heads over data tracks on the media, to compensate for sudden jarring device movements, thermal expansion, or changes in orientation. Degaussing indiscriminately removes not only the stored data but also the servo control data, and without the servo data the device is no longer able to determine where data is to be read or written on the magnetic medium. For the device to become usable again, the servo data must be rewritten; with modern hard drives, this is generally not possible without manufacturer-specific and often model-specific service equipment.
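For illustration, servo position fields often use a Gray code because adjacent track numbers differ in only one bit, so a head straddling two tracks reads a value that is off by at most one track rather than a wildly wrong position. The following Python snippet is a generic sketch of that property, not a representation of any particular drive's servo format.

```python
def to_gray(n: int) -> int:
    """Convert a binary track number to its reflected Gray code."""
    return n ^ (n >> 1)

def bit_difference(a: int, b: int) -> int:
    """Number of bit positions in which two codes differ."""
    return bin(a ^ b).count("1")

# Adjacent track numbers always differ in exactly one bit of their Gray code.
for track in range(8):
    g1, g2 = to_gray(track), to_gray(track + 1)
    print(f"track {track}->{track + 1}: Gray {g1:04b} -> {g2:04b}, "
          f"{bit_difference(g1, g2)} bit changed")
```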

Audio tape recorders

In reel-to-reel and compact cassette audio tape recorders, remnant magnetic fields gather over time on metal parts such as guide posts and tape heads, the points that come into contact with the magnetic tape. These remnant fields can cause an increase in audible background noise during playback. Cheap, handheld consumer degaussers can significantly reduce this effect.

Types of degaussers

Degaussers range in size from small office units used to erase magnetic data storage devices to industrial-scale degaussers used on piping, ships, submarines, and other large items of equipment and vehicles. Degaussers are rated and categorized by the strength of the magnetic field they generate, the method used to generate that field, the type of operations they are suitable for, their working rate (high-volume versus low-volume), and their mobility, among other criteria. By these criteria, the main types are electromagnetic degaussers and permanent magnet degaussers.

Electromagnetic degaussers

An electromagnetic degausser passes an electrical current through a degaussing coil to generate a magnetic field. There are several sub-types, such as rotating coil degaussers and pulse demagnetization degaussers, because the underlying technologies are often developed and patented by individual manufacturers, such as Verity Systems and Maurer Magnetic, to suit the degausser's intended use. Electromagnetic degaussers generate strong magnetic fields and have a high rate of work.

Rotating coil degausser

The performance of a degaussing machine is the major determinant of how effectively magnetic data storage media are degaussed. Effectiveness does not improve when the media pass through the same degaussing field more than once, but rotating the media by 90 degrees does improve it. One manufacturer of magnetic media degaussers, Verity Systems, has used this principle in a rotating coil technique it developed. Its rotating coil degausser carries the media being erased on a variable-speed conveyor belt through a magnetic field generated by two rotating coils, one positioned above the media and the other below.

Pulse degaussing

Pulse degaussing technology applies electric current to the field-generating coil in short, cyclic pulses. The process starts at the maximum voltage, held for only a fraction of a second to avoid overheating the coil; the voltage of each subsequent pulse is then stepped down until no current is applied to the coil. Pulse degaussing saves on energy costs, produces high magnetic field strengths, is suitable for degaussing large assemblies, and is reliable in achieving complete degaussing.

Permanent magnet degausser

Permanent magnet degaussers use magnets made from rare earth materials and do not require electricity to operate. Because they produce a constant magnetic field, they require adequate shielding to prevent unintended degaussing, and the need for shielding usually makes them bulky. When small, permanent magnet degaussers are well suited for use as mobile degaussers.

Shale gas

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Shale_gas...