Medical image computing

Medical image computing (MIC) is an interdisciplinary field at the intersection of computer science, information engineering, electrical engineering, physics, mathematics and medicine. This field develops computational and mathematical methods for solving problems pertaining to medical images and their use for biomedical research and clinical care.

The main goal of MIC is to extract clinically relevant information or knowledge from medical images. While closely related to the field of medical imaging, MIC focuses on the computational analysis of the images, not their acquisition. The methods can be grouped into several broad categories: image segmentation, image registration, image-based physiological modeling, and others.

Data forms

Medical image computing typically operates on uniformly sampled data with regular x-y-z spatial spacing (images in 2D and volumes in 3D, generically referred to as images). At each sample point, data is commonly represented in integer form, such as signed or unsigned short (16-bit), although forms from unsigned char (8-bit) to 32-bit float are not uncommon. The particular meaning of the data at a sample point depends on modality: for example, a CT acquisition collects radiodensity values, while an MRI acquisition may collect T1- or T2-weighted images. Longitudinal, time-varying acquisitions may or may not acquire images with regular time steps. Fan-like images from modalities such as curved-array ultrasound are also common and require different representational and algorithmic techniques to process. Other data forms include sheared images due to gantry tilt during acquisition, and unstructured meshes, such as hexahedral and tetrahedral forms, which are used in advanced biomechanical analysis (e.g., tissue deformation, vascular transport, bone implants).
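
As a concrete illustration of this data form, a uniformly sampled volume can be represented as an array together with spacing and origin metadata. The following Python sketch uses hypothetical CT-like dimensions and voxel sizes; the names and values are placeholders rather than any fixed convention.

    import numpy as np

    # A minimal sketch of a uniformly sampled 3D volume: a CT-like image
    # stored as 16-bit signed integers, with regular x-y-z spacing in mm.
    volume = np.zeros((512, 512, 120), dtype=np.int16)  # 512x512 in-plane, 120 slices
    spacing = (0.70, 0.70, 2.50)   # hypothetical voxel size in mm
    origin = (0.0, 0.0, 0.0)       # patient-space position of voxel (0, 0, 0)

    # World (physical) coordinate of a voxel index, assuming axis-aligned sampling:
    def index_to_world(idx):
        return tuple(o + i * s for i, s, o in zip(idx, spacing, origin))

    print(index_to_world((256, 256, 60)))  # centre of the volume in mm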

Segmentation

A T1 weighted MR image of the brain of a patient with a meningioma after injection of an MRI contrast agent (top left), and the same image with the result of an interactive segmentation overlaid in green (3D model of the segmentation on the top right, axial and coronal views at the bottom).

Segmentation is the process of partitioning an image into different meaningful segments. In medical imaging, these segments often correspond to different tissue classes, organs, pathologies, or other biologically relevant structures. Medical image segmentation is made difficult by low contrast, noise, and other imaging ambiguities. Although there are many computer vision techniques for image segmentation, some have been adapted specifically for medical image computing. Below is a sampling of techniques within this field; their implementation often relies on the domain expertise that clinicians can provide.

  • Atlas-Based Segmentation: For many applications, a clinical expert can manually label several images; segmenting unseen images is a matter of extrapolating from these manually labeled training images. Methods of this style are typically referred to as atlas-based segmentation methods. Parametric atlas methods typically combine these training images into a single atlas image, while nonparametric atlas methods typically use all of the training images separately. Atlas-based methods usually require the use of image registration in order to align the atlas image or images to a new, unseen image.
  • Shape-Based Segmentation: Many methods parametrize a template shape for a given structure, often relying on control points along the boundary. The entire shape is then deformed to match a new image. Two of the most common shape-based techniques are Active Shape Models and Active Appearance Models. These methods have been very influential, and have given rise to similar models.
  • Image-Based segmentation: Some methods initiate a template and refine its shape according to the image data while minimizing integral error measures, like the Active contour model and its variations.
  • Interactive Segmentation: Interactive methods are useful when clinicians can provide some information, such as a seed region or rough outline of the region to segment. An algorithm can then iteratively refine such a segmentation, with or without guidance from the clinician. Manual segmentation, using tools such as a paint brush to explicitly define the tissue class of each pixel, remains the gold standard for many imaging applications. Recently, principles from feedback control theory have been incorporated into segmentation, which give the user much greater flexibility and allow for the automatic correction of errors.
  • Subjective surface Segmentation: This method is based on the idea of evolution of segmentation function which is governed by an advection-diffusion model. To segment an object, a segmentation seed is needed (that is the starting point that determines the approximate position of the object in the image). Consequently, an initial segmentation function is constructed. The idea behind the subjective surface method is that the position of the seed is the main factor determining the form of this segmentation function.

Other classifications of image segmentation methods exist that are similar to the categories above. A further group, often labelled "hybrid", combines several of these methods.
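
To make the seed-based, interactive idea above concrete, here is a toy seeded region-growing segmenter in Python. It is a minimal sketch rather than a clinical algorithm: it assumes a 3D integer volume and grows a 6-connected region around a clinician-provided seed using a simple intensity tolerance.

    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=50):
        """Grow a 6-connected region from a seed voxel, accepting neighbours
        whose intensity lies within `tol` of the seed intensity."""
        mask = np.zeros(image.shape, dtype=bool)
        mask[seed] = True
        seed_val = int(image[seed])
        queue = deque([seed])
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            p = queue.popleft()
            for off in offsets:
                q = tuple(p[i] + off[i] for i in range(3))
                if (all(0 <= q[i] < image.shape[i] for i in range(3))
                        and not mask[q]
                        and abs(int(image[q]) - seed_val) <= tol):
                    mask[q] = True
                    queue.append(q)
        return mask

    # Toy volume: a bright "lesion" embedded in a darker background.
    img = np.full((40, 40, 40), 100, dtype=np.int16)
    img[15:25, 15:25, 15:25] = 300
    print(region_grow(img, (20, 20, 20)).sum())  # 1000 voxels recovered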

Registration

CT image (left), PET image (center) and overlay of both (right) after correct registration.

Image registration is a process that searches for the correct alignment of images. In the simplest case, two images are aligned. Typically, one image is treated as the target image and the other is treated as a source image; the source image is transformed to match the target image. The optimization procedure updates the transformation of the source image based on a similarity value that evaluates the current quality of the alignment. This iterative procedure is repeated until a (local) optimum is found. An example is the registration of CT and PET images to combine structural and metabolic information (see figure).
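
In practice this loop is usually delegated to a toolkit. The sketch below uses the SimpleITK library with placeholder file names; the rigid transform, Mattes mutual information metric, and gradient-descent optimizer shown here are one reasonable configuration among many, not a canonical recipe.

    import SimpleITK as sitk

    # Placeholder file names; any two 3D volumes readable by SimpleITK would do.
    fixed = sitk.ReadImage("target.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("source.nii.gz", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))  # rigid starting point
    reg.SetInterpolator(sitk.sitkLinear)

    # Iteratively update the transform until a (local) optimum of the metric is found.
    transform = reg.Execute(fixed, moving)
    aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    sitk.WriteImage(aligned, "source_aligned.nii.gz")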

Image registration is used in a variety of medical applications:

  • Studying temporal changes. Longitudinal studies acquire images over several months or years to study long-term processes, such as disease progression. Time series, in contrast, correspond to images acquired within the same session (seconds or minutes). They can be used to study cognitive processes, heart deformation, and respiration.
  • Combining complementary information from different imaging modalities. An example is the fusion of anatomical and functional information. Since the size and shape of structures vary across modalities, it is more challenging to evaluate the alignment quality. This has led to the use of similarity measures such as mutual information.
  • Characterizing a population of subjects. In contrast to intra-subject registration, a one-to-one mapping may not exist between subjects, depending on the structural variability of the organ of interest. Inter-subject registration is required for atlas construction in computational anatomy. Here, the objective is to statistically model the anatomy of organs across subjects.
  • Computer-assisted surgery. In computer-assisted surgery pre-operative images such as CT or MRI are registered to intra-operative images or tracking systems to facilitate image guidance or navigation.

There are several important considerations when performing image registration:

  • The transformation model. Common choices are rigid, affine, and deformable transformation models. B-spline and thin plate spline models are commonly used for parameterized transformation fields. Non-parametric or dense deformation fields carry a displacement vector at every grid location; this necessitates additional regularization constraints. A specific class of deformation fields is that of diffeomorphisms, which are invertible transformations with a smooth inverse.
  • The similarity metric. A distance or similarity function is used to quantify the registration quality. This similarity can be calculated either on the original images or on features extracted from the images. Common similarity measures are sum of squared distances (SSD), correlation coefficient, and mutual information. The choice of similarity measure depends on whether the images are from the same modality; the acquisition noise can also play a role in this decision. For example, SSD is the optimal similarity measure for images of the same modality with Gaussian noise. However, the image statistics in ultrasound are significantly different from Gaussian noise, leading to the introduction of ultrasound-specific similarity measures. Multi-modal registration requires a more sophisticated similarity measure; alternatively, a different image representation can be used, such as structural representations or registering adjacent anatomy. A recent study employed contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multi-modal Image Representations), which enabled the registration of multi-modal images where existing registration methods often fail due to a lack of sufficiently similar image structures. It reduced the multi-modal registration problem to a mono-modal one, in which general intensity-based as well as feature-based registration algorithms can be applied. (A small sketch of two common similarity measures follows this list.)
  • The optimization procedure. Either continuous or discrete optimization is performed. For continuous optimization, gradient-based optimization techniques are applied to improve the convergence speed.
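
As a minimal sketch of the two measures most often contrasted above, the following numpy functions compute SSD and a histogram-based mutual information estimate; the bin count and test data are arbitrary placeholders.

    import numpy as np

    def ssd(a, b):
        # Sum of squared distances: suited to same-modality images with Gaussian noise.
        return np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2)

    def mutual_information(a, b, bins=32):
        # Histogram-based mutual information: suited to multi-modal registration,
        # since it assumes only a statistical (not linear) intensity relationship.
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    rng = np.random.default_rng(0)
    a = rng.normal(size=(64, 64))
    print(ssd(a, a), mutual_information(a, a))      # perfectly aligned: SSD is 0
    print(ssd(a, rng.normal(size=(64, 64))))        # misaligned: SSD grows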

Visualization

Volume rendering (left), axial cross-section (right top), and sagittal cross-section (right bottom) of a CT image of a subject with multiple nodular lesions (white line) in the lung.

Visualization plays several key roles in Medical Image Computing. Methods from scientific visualization are used to understand and communicate about medical images, which are inherently spatial-temporal. Data visualization and data analysis are used on unstructured data forms, for example when evaluating statistical measures derived during algorithmic processing. Direct interaction with data, a key feature of the visualization process, is used to perform visual queries about data, annotate images, guide segmentation and registration processes, and control the visual representation of data (by controlling lighting, rendering properties, and viewing parameters). Visualization is used both for initial exploration and for conveying intermediate and final results of analyses.

The figure "Visualization of Medical Imaging" illustrates several types of visualization: 1. the display of cross-sections as gray scale images; 2. reformatted views of gray scale images (the sagittal view in this example has a different orientation than the original direction of the image acquisition; and 3. A 3D volume rendering of the same data. The nodular lesion is clearly visible in the different presentations and has been annotated with a white line.

Atlases

Medical images can vary significantly across individuals due to people having organs of different shapes and sizes. Therefore, representing medical images to account for this variability is crucial. A popular approach to represent medical images is through the use of one or more atlases. Here, an atlas refers to a specific model for a population of images with parameters that are learned from a training dataset.

The simplest example of an atlas is a mean intensity image, commonly referred to as a template. However, an atlas can also include richer information, such as local image statistics and the probability that a particular spatial location has a certain label. New medical images, not seen during training, can be mapped to an atlas tailored to the specific application, such as segmentation or group analysis. Mapping an image to an atlas usually involves registering the image and the atlas. The resulting deformation can be used to characterize variability in medical images.
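
A toy illustration of the probabilistic-atlas idea, assuming the expert-labelled training masks have already been registered to a common template space; the shapes and noise level below are fabricated for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    base = np.zeros((32, 32, 32), dtype=bool)
    base[10:22, 10:22, 10:22] = True     # the "organ" in template space

    # Simulate expert-labelled masks that disagree on ~5% of voxels.
    training_masks = [base ^ (rng.random(base.shape) < 0.05) for _ in range(10)]

    # The atlas value at each voxel is the empirical probability of the label.
    atlas = np.mean(np.stack(training_masks).astype(np.float64), axis=0)

    # Segment a new (already registered) image by thresholding the prior,
    # i.e. label voxels where more than half of the training subjects agree.
    segmentation = atlas > 0.5
    print((segmentation == base).mean())  # close to 1.0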

Single template

The simplest approach is to model medical images as deformed versions of a single template image. For example, anatomical MRI brain scans are often mapped to the MNI template so as to represent all the brain scans in common coordinates. The main drawback of a single-template approach is that if there are significant differences between the template and a given test image, then there may not be a good way to map one onto the other. For example, an anatomical MRI brain scan of a patient with severe brain abnormalities (e.g., a tumor or prior surgical intervention) may not easily map to the MNI template.

Multiple templates

Rather than relying on a single template, multiple templates can be used. The idea is to represent an image as a deformed version of one of the templates. For example, there could be one template for a healthy population and one template for a diseased population. However, in many applications, it is not clear how many templates are needed. A simple albeit computationally expensive way to deal with this is to have every image in a training dataset be a template image and thus every new image encountered is compared against every image in the training dataset. A more recent approach automatically finds the number of templates needed.

Statistical analysis

Statistical methods combine the medical imaging field with modern Computer Vision, Machine Learning and Pattern Recognition. Over the last decade, several large datasets have been made publicly available (see for example ADNI, 1000 functional Connectomes Project), in part due to collaboration between various institutes and research centers. This increase in data size calls for new algorithms that can mine and detect subtle changes in the images to address clinical questions. Such clinical questions are very diverse and include group analysis, imaging biomarkers, disease phenotyping and longitudinal studies.

Group analysis

In group analysis, the objective is to detect and quantify abnormalities induced by a disease by comparing the images of two or more cohorts. Usually one of these cohorts consists of normal (control) subjects, and the other consists of abnormal patients. Variation caused by the disease can manifest itself as abnormal deformation of anatomy (see Voxel-based morphometry). For example, shrinkage of sub-cortical tissues such as the hippocampus may be linked to Alzheimer's disease. Additionally, changes in biochemical (functional) activity can be observed using imaging modalities such as Positron Emission Tomography.

The comparison between groups is usually conducted on the voxel level. Hence, the most popular pre-processing pipeline, particularly in neuroimaging, transforms all of the images in a dataset to a common coordinate frame via medical image registration in order to maintain correspondence between voxels. Given this voxel-wise correspondence, the most common frequentist method is to extract a statistic for each voxel (for example, the mean voxel intensity for each group) and perform statistical hypothesis testing to evaluate whether the null hypothesis is supported or rejected. The null hypothesis typically assumes that the two cohorts are drawn from the same distribution and hence should have the same statistical properties (for example, the mean values of the two groups are equal for a particular voxel). Since medical images contain large numbers of voxels, the issue of multiple comparisons needs to be addressed. There are also Bayesian approaches to tackle the group analysis problem.
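
A minimal sketch of this voxel-wise pipeline on synthetic data: two cohorts already in a common frame, a two-sample t-test at every voxel, and a Bonferroni correction for the multiple-comparison problem. The cohort sizes and effect location are invented for the example.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    controls = rng.normal(0.0, 1.0, size=(20, 16, 16, 16))  # 20 control subjects
    patients = rng.normal(0.0, 1.0, size=(18, 16, 16, 16))  # 18 patients
    patients[:, 8, 8, 8] += 2.0                             # simulated disease effect

    # Two-sample t-test at every voxel (axis 0 runs over subjects).
    t, p = stats.ttest_ind(controls, patients, axis=0)

    # Bonferroni correction for the multiple-comparison problem.
    significant = p < 0.05 / p.size
    print(np.argwhere(significant))  # ideally recovers voxel (8, 8, 8)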

Classification

Although group analysis can quantify the general effects of a pathology on anatomy and function, it does not provide subject-level measures, and hence its results cannot be used as biomarkers for diagnosis (see Imaging Biomarkers). Clinicians, on the other hand, are often interested in early diagnosis of the pathology (i.e. classification) and in learning the progression of a disease (i.e. regression). From a methodological point of view, current techniques vary from applying standard machine learning algorithms to medical imaging datasets (e.g. the Support Vector Machine), to developing new approaches adapted for the needs of the field. The main difficulties are as follows:

  • Small sample size (Curse of Dimensionality): a large medical imaging dataset contains hundreds to thousands of images, whereas the number of voxels in a typical volumetric image can easily go beyond millions. A remedy to this problem is to reduce the number of features in an informative sense (see dimensionality reduction). Several unsupervised, semi-supervised, and supervised approaches have been proposed to address this issue (a minimal sketch follows this list).
  • Interpretability: A good generalization accuracy is not always the primary objective, as clinicians would like to understand which parts of anatomy are affected by the disease. Therefore, interpretability of the results is very important; methods that ignore the image structure are not favored. Alternative methods based on feature selection have been proposed.
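
A small scikit-learn sketch of the dimensionality-reduction remedy mentioned in the first item, on fabricated data with far more voxel features than subjects; the component count and effect size are arbitrary choices.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Curse of dimensionality in caricature: 60 "subjects", 10,000 voxel features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10_000))
    y = np.repeat([0, 1], 30)
    X[y == 1, :50] += 0.8  # weak, spatially localized disease effect

    # Reduce the feature count before the SVM, as suggested above.
    clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
    print(cross_val_score(clf, X, y, cv=5).mean())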

Clustering

Image-based pattern classification methods typically assume that the neurological effects of a disease are distinct and well defined. This may not always be the case. For a number of medical conditions, the patient populations are highly heterogeneous, and further categorization into sub-conditions has not been established. Additionally, some diseases (e.g., autism spectrum disorder (ASD), schizophrenia, mild cognitive impairment (MCI)) can be characterized by a continuous or nearly continuous spectrum from mild impairment to very pronounced pathological changes. To facilitate image-based analysis of heterogeneous disorders, methodological alternatives to pattern classification have been developed. These techniques borrow ideas from high-dimensional clustering and high-dimensional pattern regression to cluster a given population into homogeneous sub-populations. The goal is to provide a better quantitative understanding of the disease within each sub-population.

Shape analysis

Shape analysis is the field of Medical Image Computing that studies geometrical properties of structures obtained from different imaging modalities. Shape analysis has recently become of increasing interest to the medical community due to its potential to precisely locate morphological changes between different populations of structures, e.g. healthy vs. pathological, female vs. male, young vs. elderly. Shape analysis includes two main steps: shape correspondence and statistical analysis.

  • Shape correspondence is the methodology that computes correspondent locations between geometric shapes represented by triangle meshes, contours, point sets or volumetric images. The definition of correspondence directly influences the analysis. Options for correspondence frameworks include anatomical correspondence, manual landmarks, functional correspondence (e.g., in brain morphometry, loci responsible for the same neuronal functionality), geometric correspondence, intensity similarity (for image volumes), and others. Some approaches, e.g. spectral shape analysis, do not require correspondence but compare shape descriptors directly. (A small alignment sketch follows this list.)
  • Statistical analysis will provide measurements of structural change at corresponding locations.
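
Once corresponding landmarks are available, nuisance similarity transforms (translation, scale, rotation) must be factored out before structural change can be measured. A minimal sketch using scipy's Procrustes analysis on fabricated landmark sets:

    import numpy as np
    from scipy.spatial import procrustes

    # Two shapes as corresponding landmark sets (rows are matched points).
    rng = np.random.default_rng(0)
    shape_a = rng.normal(size=(30, 3))
    angle = np.deg2rad(20)
    rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                    [np.sin(angle),  np.cos(angle), 0.0],
                    [0.0,            0.0,           1.0]])
    shape_b = 1.5 * shape_a @ rot.T + 0.3  # rotated, scaled, translated copy

    # Procrustes analysis removes translation, scale and rotation; the residual
    # "disparity" is a measure of genuine structural difference.
    mtx_a, mtx_b, disparity = procrustes(shape_a, shape_b)
    print(disparity)  # ~0 for shapes differing only by a similarity transform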

Longitudinal studies

In longitudinal studies the same person is imaged repeatedly. This information can be incorporated both into the image analysis, as well as into the statistical modeling.

  • In longitudinal image processing, segmentation and analysis methods for individual time points are informed and regularized with common information, usually from a within-subject template. This regularization is designed to reduce measurement noise and thus helps increase sensitivity and statistical power. At the same time, over-regularization needs to be avoided, so that effect sizes remain stable. Intense regularization, for example, can lead to excellent test-retest reliability but limits the ability to detect any true changes and differences across groups. Often a trade-off must be sought that optimizes noise reduction at the cost of only a limited loss in effect size. Another common challenge in longitudinal image processing is the often unintentional introduction of processing bias. When, for example, follow-up images are registered and resampled to the baseline image, interpolation artifacts are introduced into only the follow-up images and not the baseline. These artifacts can cause spurious effects (usually a bias towards overestimating longitudinal change and thus underestimating the required sample size). It is therefore essential that all time points are treated exactly the same to avoid any processing bias.
  • Post-processing and statistical analysis of longitudinal data usually requires dedicated statistical tools such as repeated measures ANOVA or the more powerful linear mixed effects models (a toy example follows this list). Additionally, it is advantageous to consider the spatial distribution of the signal. For example, cortical thickness measurements will show a correlation within subject across time and also within a neighborhood on the cortical surface, a fact that can be used to increase statistical power. Furthermore, time-to-event (survival) analysis is frequently employed to analyze longitudinal data and determine significant predictors.
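
A toy statsmodels sketch of the linear mixed effects approach mentioned above, with fabricated cortical thickness measurements: a fixed effect of time and a random intercept per subject.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # 15 subjects, 4 visits each, with subject-specific baselines and a
    # common thinning trend of -0.03 mm per visit (all values invented).
    rng = np.random.default_rng(0)
    rows = []
    for subj in range(15):
        baseline = rng.normal(2.5, 0.2)
        for t in range(4):
            rows.append({"subject": subj, "time": t,
                         "thickness": baseline - 0.03 * t + rng.normal(0, 0.02)})
    data = pd.DataFrame(rows)

    # Fixed effect of time, random intercept per subject.
    model = smf.mixedlm("thickness ~ time", data, groups=data["subject"]).fit()
    print(model.summary())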

Image-based physiological modelling

Traditionally, medical image computing has sought to address the quantification and fusion of structural or functional information available at the point and time of image acquisition. In this regard, it can be seen as quantitative sensing of the underlying anatomical, physical or physiological processes. However, over the last few years, there has been a growing interest in the predictive assessment of disease or therapy course. Image-based modelling, be it of a biomechanical or physiological nature, can therefore extend the possibilities of image computing from a descriptive to a predictive angle.

According to the STEP research roadmap, the Virtual Physiological Human (VPH) is a methodological and technological framework that, once established, will enable the investigation of the human body as a single complex system. Underlying the VPH concept, the International Union for Physiological Sciences (IUPS) has been sponsoring the IUPS Physiome Project for more than a decade. This is a worldwide public domain effort to provide a computational framework for understanding human physiology. It aims at developing integrative models at all levels of biological organization, from genes to whole organisms via gene regulatory networks, protein pathways, integrative cell functions, and tissue and whole organ structure/function relations. Such an approach aims at transforming current practice in medicine and underpins a new era of computational medicine.

In this context, medical imaging and image computing play an increasingly important role as they provide systems and methods to image, quantify and fuse both structural and functional information about the human being in vivo. These two broad research areas include the transformation of generic computational models to represent specific subjects, thus paving the way for personalized computational models. Individualization of generic computational models through imaging can be realized in three complementary directions:

  • definition of the subject-specific computational domain (anatomy) and related subdomains (tissue types);
  • definition of boundary and initial conditions from (dynamic and/or functional) imaging; and
  • characterization of structural and functional tissue properties.

In addition, imaging also plays a pivotal role in the evaluation and validation of such models both in humans and in animal models, and in the translation of models to the clinical setting with both diagnostic and therapeutic applications. In this specific context, molecular, biological, and pre-clinical imaging render additional data and understanding of basic structure and function in molecules, cells, tissues and animal models that may be transferred to human physiology where appropriate.

The applications of image-based VPH/Physiome models in basic and clinical domains are vast. Broadly speaking, they promise to become new virtual imaging techniques: effectively, more (and often non-observable) parameters will be imaged in silico based on the integration of observable but sometimes sparse and inconsistent multimodal images and physiological measurements. Computational models will serve to interpret the measurements in a way compliant with the underlying biophysical, biochemical or biological laws of the physiological or pathophysiological processes under investigation. Ultimately, such investigative tools and systems will help our understanding of disease processes, the natural history of disease evolution, and the influence of pharmacological and/or interventional therapeutic procedures on the course of a disease.

Cross-fertilization between imaging and modelling goes beyond interpretation of measurements in a way consistent with physiology. Image-based patient-specific modelling, combined with models of medical devices and pharmacological therapies, opens the way to predictive imaging whereby one will be able to understand, plan and optimize such interventions in silico.

Mathematical methods in medical imaging

A number of sophisticated mathematical methods have entered medical imaging, and have already been implemented in various software packages. These include approaches based on partial differential equations (PDEs) and curvature-driven flows for enhancement, segmentation, and registration. Since they employ PDEs, the methods are amenable to parallelization and implementation on GPGPUs. A number of these techniques have been inspired by ideas from optimal control. Accordingly, ideas from control theory have recently made their way into interactive methods, especially segmentation. Moreover, because of noise and the need for statistical estimation techniques for more dynamically changing imagery, the Kalman filter and particle filter have come into use. A survey of these methods with an extensive list of references may be found in the literature.

Modality specific computing

Some imaging modalities provide very specialized information. The resulting images cannot be treated as regular scalar images and give rise to new sub-areas of Medical Image Computing. Examples include diffusion MRI, functional MRI and others.

Diffusion MRI

A mid-axial slice of the ICBM diffusion tensor image template. Each voxel's value is a tensor represented here by an ellipsoid. Color denotes principal orientation: red = left-right, blue = inferior-superior, green = posterior-anterior.

Diffusion MRI is a structural magnetic resonance imaging modality that allows measurement of the diffusion process of molecules. Diffusion is measured by applying a gradient pulse to a magnetic field along a particular direction. In a typical acquisition, a set of uniformly distributed gradient directions is used to create a set of diffusion weighted volumes. In addition, an unweighted volume is acquired under the same magnetic field without application of a gradient pulse. As each acquisition is associated with multiple volumes, diffusion MRI has created a variety of unique challenges in medical image computing.

In medicine, there are two major computational goals in diffusion MRI:

  • Estimation of local tissue properties, such as diffusivity;
  • Estimation of local directions and global pathways of diffusion.

The diffusion tensor, a 3 × 3 symmetric positive-definite matrix, offers a straightforward solution to both of these goals. It is proportional to the covariance matrix of a Normally distributed local diffusion profile and, thus, the dominant eigenvector of this matrix is the principal direction of local diffusion. Due to the simplicity of this model, a maximum likelihood estimate of the diffusion tensor can be found by simply solving a system of linear equations at each location independently. However, as the volume is assumed to contain contiguous tissue fibers, it may be preferable to estimate the volume of diffusion tensors in its entirety by imposing regularity conditions on the underlying field of tensors. Scalar values can be extracted from the diffusion tensor, such as the fractional anisotropy, mean, axial and radial diffusivities, which indirectly measure tissue properties such as the dysmyelination of axonal fibers or the presence of edema. Standard scalar image computing methods, such as registration and segmentation, can be applied directly to volumes of such scalar values. However, to fully exploit the information in the diffusion tensor, these methods have been adapted to account for tensor valued volumes when performing registration and segmentation.
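
To make the "system of linear equations at each location" concrete, here is a single-voxel least-squares tensor fit in numpy under the standard log-linear model; the b-value, gradient count, and tensor are fabricated, and a real pipeline would add noise handling and positivity constraints.

    import numpy as np

    # Simulate signals S_i = S0 * exp(-b * g_i^T D g_i) for unit directions g_i.
    rng = np.random.default_rng(0)
    b = 1000.0                                     # hypothetical b-value, s/mm^2
    g = rng.normal(size=(30, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)  # 30 unit gradient directions
    D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])     # anisotropic "fiber" tensor
    signals = 1000.0 * np.exp(-b * np.einsum("ij,jk,ik->i", g, D_true, g))

    # Design matrix for the unknowns [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
    A = np.column_stack([np.ones(len(g)),
                         -b * g[:, 0]**2, -b * g[:, 1]**2, -b * g[:, 2]**2,
                         -2 * b * g[:, 0] * g[:, 1],
                         -2 * b * g[:, 0] * g[:, 2],
                         -2 * b * g[:, 1] * g[:, 2]])
    x, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
    D = np.array([[x[1], x[4], x[5]],
                  [x[4], x[2], x[6]],
                  [x[5], x[6], x[3]]])

    # Scalar measures from the eigenvalues: mean diffusivity and fractional anisotropy.
    evals = np.linalg.eigvalsh(D)
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md)**2) / np.sum(evals**2))
    print(md, fa)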

Given the principal direction of diffusion at each location in the volume, it is possible to estimate the global pathways of diffusion through a process known as tractography. However, due to the relatively low resolution of diffusion MRI, many of these pathways may cross, kiss or fan at a single location. In this situation, the single principal direction of the diffusion tensor is not an appropriate model for the local diffusion distribution. The most common solution to this problem is to estimate multiple directions of local diffusion using more complex models. These include mixtures of diffusion tensors, Q-ball imaging, diffusion spectrum imaging and fiber orientation distribution functions, which typically require HARDI acquisition with a large number of gradient directions. As with the diffusion tensor, volumes valued with these complex models require special treatment when applying image computing methods, such as registration and segmentation.

Functional MRI

Functional magnetic resonance imaging (fMRI) is a medical imaging modality that indirectly measures neural activity by observing the local hemodynamics, or blood oxygen level dependent signal (BOLD). fMRI data offers a range of insights, and can be roughly divided into two categories:

  • Task-related fMRI is acquired as the subject is performing a sequence of timed experimental conditions. In block-design experiments, the conditions are present for short periods of time (e.g., 10 seconds) and are alternated with periods of rest. Event-related experiments rely on a random sequence of stimuli and use a single time point to denote each condition. The standard approach to analyze task-related fMRI is the general linear model (GLM); a minimal sketch follows this list.
  • Resting state fMRI is acquired in the absence of any experimental task. Typically, the objective is to study the intrinsic network structure of the brain. Observations made during rest have also been linked to specific cognitive processes such as encoding or reflection. Most studies of resting state fMRI focus on low frequency fluctuations of the fMRI signal (LF-BOLD). Seminal discoveries include the default network, a comprehensive cortical parcellation, and the linking of network characteristics to behavioral parameters.
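
A minimal GLM sketch for a single voxel in a block-design experiment, on simulated data. The gamma-shaped kernel below is a crude stand-in for a canonical hemodynamic response function, not the one used by any particular package.

    import numpy as np

    rng = np.random.default_rng(0)
    n_scans, tr = 120, 2.0
    task = np.zeros(n_scans)
    task[(np.arange(n_scans) // 10) % 2 == 1] = 1.0  # alternating 20 s on/off blocks

    # Crude HRF stand-in: a gamma-shaped kernel sampled at the TR.
    t = np.arange(0, 20, tr)
    hrf = (t / 5.0) ** 5 * np.exp(-t)
    hrf /= hrf.sum()
    regressor = np.convolve(task, hrf)[:n_scans]

    X = np.column_stack([np.ones(n_scans), regressor])  # design matrix
    y = 2.0 * regressor + rng.normal(0, 0.5, n_scans)   # simulated BOLD time series

    # Least-squares betas and a t-statistic for the task regressor.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n_scans - X.shape[1])
    t_stat = beta[1] / np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    print(beta[1], t_stat)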

There is a rich set of methodology used to analyze functional neuroimaging data, and there is often no consensus regarding the best method. Instead, researchers approach each problem independently and select a suitable model/algorithm. In this context there is a relatively active exchange among the neuroscience, computational biology, statistics, and machine learning communities. Prominent approaches include:

  • Massive univariate approaches that probe individual voxels in the imaging data for a relationship to the experimental condition. The prime approach is the general linear model (GLM).
  • Multivariate and classifier-based approaches, often referred to as multi-voxel pattern analysis or multivariate pattern analysis, which probe the data for global and potentially distributed responses to an experimental condition. Early approaches used support vector machines (SVM) to study responses to visual stimuli. Recently, alternative pattern recognition algorithms have been explored, such as random-forest-based Gini contrast or sparse regression and dictionary learning.
  • Functional connectivity analysis studies the intrinsic network structure of the brain, including the interactions between regions. The majority of such studies focus on resting state data to parcellate the brain or to find correlates to behavioral measures. Task-specific data can be used to study causal relationships among brain regions (e.g., dynamic causal modelling (DCM)). A toy connectivity sketch follows this list.
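
A toy connectivity sketch, assuming the fMRI data have already been reduced to region-averaged time series; the co-fluctuation between the first two regions is planted for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n_scans, n_regions = 200, 6
    ts = rng.normal(size=(n_scans, n_regions))     # region-averaged time series
    ts[:, 1] = 0.7 * ts[:, 0] + 0.3 * ts[:, 1]     # regions 0 and 1 co-fluctuate

    # The functional connectivity "network": region-by-region correlations.
    connectivity = np.corrcoef(ts.T)
    print(np.round(connectivity, 2))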

When working with large cohorts of subjects, the normalization (registration) of individual subjects into a common reference frame is crucial. A body of work and tools exists to perform normalization based on anatomy (FSL, FreeSurfer, SPM). Alignment that takes spatial variability across subjects into account is a more recent line of work. Examples are the alignment of the cortex based on fMRI signal correlation, the alignment based on the global functional connectivity structure in both task and resting state data, and the alignment based on stimulus-specific activation profiles of individual voxels.

Software

Software for medical image computing is a complex combination of systems providing IO, visualization and interaction, user interface, data management and computation. Typically system architectures are layered to serve algorithm developers, application developers, and users. The bottom layers are often libraries and/or toolkits which provide base computational capabilities; while the top layers are specialized applications which address specific medical problems, diseases, or body systems.

Additional notes

Medical Image Computing is also related to the field of Computer Vision. An international society, the MICCAI Society, represents the field and organizes an annual conference and associated workshops. Proceedings for this conference are published by Springer in the Lecture Notes in Computer Science series. In 2000, N. Ayache and J. Duncan reviewed the state of the field.

Medical algorithm

A medical algorithm for assessment and treatment of overweight and obesity.

A medical algorithm is any computation, formula, statistical survey, nomogram, or look-up table, useful in healthcare. Medical algorithms include decision tree approaches to healthcare treatment (e.g., if symptoms A, B, and C are evident, then use treatment X) and also less clear-cut tools aimed at reducing or defining uncertainty. A medical prescription is also a type of medical algorithm.
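
As a concrete instance of the look-up-table style (and of the overweight/obesity assessment algorithm in the figure caption above), here is the standard adult body mass index calculation with the WHO categories, written in Python:

    def bmi_category(weight_kg: float, height_m: float) -> str:
        """Body mass index with the WHO adult cut-offs: a classic
        look-up-table medical algorithm."""
        bmi = weight_kg / height_m ** 2
        if bmi < 18.5:
            return "underweight"
        if bmi < 25.0:
            return "normal weight"
        if bmi < 30.0:
            return "overweight"
        return "obese"

    print(bmi_category(85.0, 1.75))  # BMI ~27.8 -> "overweight"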

Scope

Medical algorithms are part of a broader field which is usually fit under the aims of medical informatics and medical decision-making. Medical decisions occur in several areas of medical activity including medical test selection, diagnosis, therapy and prognosis, and automatic control of medical equipment.

In relation to logic-based and artificial neural network-based clinical decision support systems, which are also computer applications used in the medical decision-making field, algorithms are less complex in architecture, data structure and user interface. Medical algorithms are not necessarily implemented using digital computers. In fact, many of them can be represented on paper, in the form of diagrams, nomograms, etc.

Examples

A wealth of medical information exists in the form of published medical algorithms. These algorithms range from simple calculations to complex outcome predictions. Most clinicians use only a small subset routinely.

Examples of medical algorithms are:

A common class of algorithms is embedded in guidelines on the choice of treatments produced by many national, state, financial and local healthcare organisations and provided as knowledge resources for day-to-day use and for the induction of new physicians. A field which has gained particular attention is the choice of medications for psychiatric conditions. In the United Kingdom, guidelines or algorithms for this have been produced by most of the circa 500 primary care trusts, substantially all of the circa 100 secondary care psychiatric units and many of the circa 10,000 general practices. In the US, there is a national (federal) initiative to provide them for all states, and by 2005 six states were adapting the approach of the Texas Medication Algorithm Project or otherwise working on their production.

A grammar—the Arden syntax—exists for describing algorithms in terms of medical logic modules. An approach such as this should allow exchange of MLMs between doctors and establishments, and enrichment of the common stock of tools.

Purpose

The intended purpose of medical algorithms is to improve and standardize decisions made in the delivery of medical care. Medical algorithms assist in standardizing selection and application of treatment regimens, with algorithm automation intended to reduce potential introduction of errors. Some attempt to predict the outcome, for example critical care scoring systems.

Computerized health diagnostics algorithms can provide timely clinical decision support, improve adherence to evidence-based guidelines, and be a resource for education and research.

Medical algorithms based on best practice can assist everyone involved in delivery of standardized treatment via a wide range of clinical care providers. Many are presented as protocols and it is a key task in training to ensure people step outside the protocol when necessary. In our present state of knowledge, generating hints and producing guidelines may be less satisfying to the authors, but more appropriate.

Cautions

In common with most science and medicine, algorithms whose contents are not wholly available for scrutiny and open to improvement should be regarded with suspicion.

Computations obtained from medical algorithms should be compared with, and tempered by, clinical knowledge and physician judgment.

Clinical decision support system

A clinical decision support system (CDSS) is a health information technology that provides clinicians, staff, patients, or other individuals with knowledge and person-specific information to improve health and health care. CDSS encompasses a variety of tools to enhance decision-making in the clinical workflow. These tools include computerized alerts and reminders to care providers and patients, clinical guidelines, condition-specific order sets, focused patient data reports and summaries, documentation templates, diagnostic support, and contextually relevant reference information, among other tools. Robert Hayward of the Centre for Health Evidence has proposed a working definition: "Clinical decision support systems link health observations with health knowledge to influence health choices by clinicians for improved health care". CDSSs constitute a major topic in artificial intelligence in medicine.

Characteristics

A clinical decision support system is an active knowledge system that uses variables of patient data to produce advice regarding health care. This implies that a CDSS is simply a decision support system that is focused on using knowledge management.

Purpose

The main purpose of a modern CDSS is to assist clinicians at the point of care. This means that clinicians interact with a CDSS to help analyse patient data and reach a diagnosis for different diseases.

In the early days, CDSSs were conceived to literally make decisions for the clinician. The clinician would input the information and wait for the CDSS to output the "right" choice, and the clinician would simply act on that output. However, the modern methodology of using CDSSs to assist means that the clinician interacts with the CDSS, utilising both their own knowledge and the CDSS's suggestions, to analyse the patient's data better than either human or CDSS could on their own. Typically, a CDSS makes suggestions for the clinician to look through, and the clinician is expected to pick out useful information from the presented results and discount erroneous CDSS suggestions.

The two main types of CDSS are knowledge-based and non-knowledge-based; both are described in the sections below.

An example of how a clinical decision support system might be used by a clinician is a diagnosis decision support system (DDSS). A DDSS requests some of the patient's data and, in response, proposes a set of appropriate diagnoses. The physician then takes the output of the DDSS and determines which diagnoses might be relevant and which are not, and, if necessary, orders further tests to narrow down the diagnosis.

Another example of a CDSS would be a case-based reasoning (CBR) system. A CBR system might use previous case data to help determine the appropriate number of beams and the optimal beam angles for use in radiotherapy for brain cancer patients; medical physicists and oncologists would then review the recommended treatment plan to determine its viability.

Another important classification of a CDSS is based on the timing of its use. Physicians use these systems at the point of care to help them as they are dealing with a patient, with the timing of use being either pre-diagnosis, during diagnosis, or post-diagnosis. Pre-diagnosis CDSSs help the physician prepare the diagnoses. CDSSs used during diagnosis help review and filter the physician's preliminary diagnostic choices to improve outcomes. Post-diagnosis CDSSs are used to mine data to derive connections between patients, their past medical history, and clinical research in order to predict future events. As of 2012, it had been claimed that decision support would begin to replace clinicians in common tasks in the future.

Another approach, used by the National Health Service in England, is to use a DDSS to triage medical conditions out of hours by suggesting a suitable next step to the patient (e.g. call an ambulance, or see a general practitioner on the next working day). The suggestion, which may be disregarded by either the patient or the phone operative if common sense or caution suggests otherwise, is based on the known information and an implicit conclusion about what the worst-case diagnosis is likely to be; it is not always revealed to the patient because it might well be incorrect and is not based on a medically-trained person's opinion - it is only used for initial triage purposes.

Knowledge-based CDSS

Most CDSSs consist of three parts: the knowledge base, an inference engine, and a mechanism to communicate. The knowledge base contains the rules and associations of compiled data which most often take the form of IF-THEN rules. If this was a system for determining drug interactions, then a rule might be that IF drug X is taken AND drug Y is taken THEN alert the user. Using another interface, an advanced user could edit the knowledge base to keep it up to date with new drugs. The inference engine combines the rules from the knowledge base with the patient's data. The communication mechanism allows the system to show the results to the user as well as have input into the system.
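
A toy Python sketch of this three-part structure: a knowledge base of IF-THEN drug-interaction rules, an inference engine that matches them against a patient's medication list, and printing as a stand-in communication mechanism. The rules are illustrative only, not clinical guidance.

    # Knowledge base: IF all listed drugs are taken, THEN raise the alert.
    KNOWLEDGE_BASE = [
        {"if_taking": {"warfarin", "aspirin"},
         "then": "Alert: increased bleeding risk."},
        {"if_taking": {"sildenafil", "nitroglycerin"},
         "then": "Alert: risk of severe hypotension."},
    ]

    def inference_engine(patient_medications):
        """Combine the rules with the patient's data: a rule fires when its
        drug set is a subset of the patient's medications."""
        meds = set(patient_medications)
        return [rule["then"] for rule in KNOWLEDGE_BASE
                if rule["if_taking"] <= meds]

    # Communication mechanism (here: just print the fired alerts).
    for alert in inference_engine(["warfarin", "aspirin", "metformin"]):
        print(alert)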

An expression language such as GELLO or CQL (Clinical Quality Language) is needed for expressing knowledge artefacts in a computable manner. For example: if a patient has diabetes mellitus, and if the last haemoglobin A1c test result was less than 7%, recommend re-testing if it has been over six months, but if the last test result was greater than or equal to 7%, then recommend re-testing if it has been over three months.
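
For illustration, the HbA1c rule above can also be written in ordinary Python (a production system would encode it in an expression language such as CQL or GELLO); the month lengths are approximated in days.

    from datetime import date, timedelta

    def hba1c_retest_due(has_diabetes: bool, last_result_pct: float,
                         last_test: date, today: date) -> bool:
        """Re-test after ~6 months if the last result was < 7%,
        after ~3 months if it was >= 7%."""
        if not has_diabetes:
            return False
        interval = timedelta(days=183) if last_result_pct < 7.0 else timedelta(days=91)
        return today - last_test > interval

    print(hba1c_retest_due(True, 7.4, date(2023, 1, 1), date(2023, 5, 1)))  # True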

The current focus of the HL7 CDS WG is to build on the Clinical Quality Language (CQL). The U.S. Centers for Medicare & Medicaid Services (CMS) has announced that it plans to use CQL for the specification of Electronic Clinical Quality Measures (eCQMs).

Non-knowledge-based CDSS

CDSSs which do not use a knowledge base use a form of artificial intelligence called machine learning, which allows computers to learn from past experiences and/or find patterns in clinical data. This eliminates the need for writing rules and for expert input. However, since systems based on machine learning cannot explain the reasons for their conclusions, most clinicians do not use them directly for diagnosis, for reliability and accountability reasons. Nevertheless, they can be useful as post-diagnostic systems, suggesting patterns for clinicians to look into in more depth.

As of 2012, three types of non-knowledge-based systems are support-vector machines, artificial neural networks and genetic algorithms.

  1. Artificial neural networks use nodes and weighted connections between them to analyse the patterns found in patient data to derive associations between symptoms and a diagnosis.
  2. Genetic algorithms are based on simplified evolutionary processes using directed selection to achieve optimal CDSS results. The selection algorithms evaluate components of random sets of solutions to a problem. The solutions that come out on top are then recombined and mutated and run through the process again. This happens over and over until the proper solution is discovered. They are functionally similar to neural networks in that they are also "black boxes" that attempt to derive knowledge from patient data.
  3. Non-knowledge-based systems often focus on a narrow list of symptoms, such as the symptoms for a single disease, as opposed to the knowledge-based approach, which can cover the diagnosis of many different diseases.

An example of a non-knowledge-based CDSS is a web server developed using a support vector machine for the prediction of gestational diabetes in Ireland. 

Regulations

United States

With the enactment of the American Recovery and Reinvestment Act of 2009 (ARRA), there is a push for widespread adoption of health information technology through the Health Information Technology for Economic and Clinical Health Act (HITECH). Through these initiatives, more hospitals and clinics are integrating electronic medical records (EMRs) and computerized physician order entry (CPOE) within their health information processing and storage. Consequently, the Institute of Medicine (IOM) promoted the usage of health information technology, including clinical decision support systems, to advance the quality of patient care. The IOM had published a report in 1999, To Err is Human, which focused on the patient safety crisis in the United States, pointing to an estimated 44,000 to 98,000 preventable deaths annually attributable to medical error. This statistic attracted great attention to the quality of patient care.

With the enactment of the HITECH Act included in the ARRA, encouraging the adoption of health IT, more detailed case law for CDSSs and EMRs is still being defined by the Office of the National Coordinator for Health Information Technology (ONC) and approved by the Department of Health and Human Services (HHS). A definition of "meaningful use" is yet to be published.

Despite the absence of laws, CDSS vendors would almost certainly be viewed as having a legal duty of care to both the patients who may be adversely affected by CDSS usage and the clinicians who may use the technology for patient care. However, legal regulations concerning such duties of care are not yet explicitly defined.

With recent legislation shifting payment incentives toward performance, CDSSs are becoming more attractive.

Effectiveness

The evidence of the effectiveness of CDSS is mixed. Certain diseases benefit more from CDSS than others. A 2018 systematic review identified six medical conditions in which CDSS improved patient outcomes in hospital settings: blood glucose management, blood transfusion management, physiologic deterioration prevention, pressure ulcer prevention, acute kidney injury prevention, and venous thromboembolism prophylaxis. A 2014 systematic review did not find a benefit in terms of risk of death when the CDSS was combined with the electronic health record. There may be some benefits, however, in terms of other outcomes. A 2005 systematic review concluded that CDSSs improved practitioner performance in 64% of the studies and patient outcomes in 13% of the studies. CDSS features associated with improved practitioner performance included automatic electronic prompts rather than requiring user activation of the system.

A 2005 systematic review found that "decision support systems significantly improved clinical practice in 68% of trials." The CDSS features associated with success included integration into the clinical workflow rather than as a separate log-in or screen, electronic rather than paper-based templates, decision support provided at the time and location of care rather than beforehand, and the provision of care recommendations.

However, later systematic reviews were less optimistic about the effects of CDS, with one from 2011 stating "There is a large gap between the postulated and empirically demonstrated benefits of [CDSS and other] eHealth technologies ... their cost-effectiveness has yet to be demonstrated".

A 5-year evaluation of the effectiveness of a CDSS in implementing rational treatment of bacterial infections was published in 2014; according to the authors, it was the first long-term study of a CDSS.

Challenges to adoption

Clinical challenges

Much effort has been put forth by many medical institutions and software companies to produce viable CDSSs to support all aspects of clinical tasks. However, with the complexity of clinical workflows and the demands on staff time high, care must be taken by the institution deploying the support system to ensure that the system becomes an integral part of the clinical workflow. Some CDSSs have met with varying amounts of success, while others have suffered from common problems preventing or reducing successful adoption and acceptance.

Two sectors of the healthcare domain in which CDSSs have had a large impact are the pharmacy and billing sectors. Commonly used pharmacy and prescription-ordering systems now perform batch-based checking of orders for negative drug interactions and report warnings to the ordering professional. Another sector of success for CDSS is in billing and claims filing. Since many hospitals rely on Medicare reimbursements to stay in operation, systems have been created to help examine both a proposed treatment plan and the current rules of Medicare in order to suggest a plan that attempts to address both the care of the patient and the financial needs of the institution.

Other CDSSs that are aimed at diagnostic tasks have found success, but are often very limited in deployment and scope. The Leeds Abdominal Pain System went operational in 1971 for the University of Leeds hospital. It was reported to have produced a correct diagnosis in 91.8% of cases, compared to the clinicians' success rate of 79.6%.

Despite the wide range of efforts by institutions to produce and use these systems, widespread adoption and acceptance have still not yet been achieved for most offerings. One large roadblock to acceptance has historically been workflow integration. A tendency to focus only on the functional decision-making core of the CDSS existed, causing a deficiency in planning how the clinician will use the product in situ. CDSSs were stand-alone applications, requiring the clinician to cease working on their current system, switch to the CDSS, input the necessary data (even if it had already been inputted into another system), and examine the results produced. The additional steps break the flow from the clinician's perspective and cost precious time.

Technical challenges and barriers to implementation

Clinical decision support systems face steep technical challenges in a number of areas. Biological systems are profoundly complicated, and a clinical decision may utilise an enormous range of potentially relevant data. For example, an electronic evidence-based medicine system may potentially consider a patient's symptoms, medical history, family history and genetics, as well as historical and geographical trends of disease occurrence, and published clinical data on therapeutic effectiveness when recommending a patient's course of treatment.

Clinically, a large deterrent to CDSS acceptance is workflow integration.

While it has been shown that clinicians require explanations of machine-learning-based CDSSs in order to be able to understand and trust their suggestions, there is a distinct overall lack of application of explainable artificial intelligence in the context of CDSS, adding another barrier to the adoption of these systems.

Another source of contention with many medical support systems is that they produce a massive number of alerts. When systems produce a high volume of warnings (especially those that do not require escalation), besides the annoyance, clinicians may pay less attention to warnings, causing potentially critical alerts to be missed. This phenomenon is called alert fatigue. 

Maintenance

One of the core challenges facing CDSS is difficulty in incorporating the extensive quantity of clinical research being published on an ongoing basis. In a given year, tens of thousands of clinical trials are published. Currently, each one of these studies must be manually read, evaluated for scientific legitimacy, and incorporated into the CDSS in an accurate way. In 2004, it was stated that the process of gathering clinical data and medical knowledge and putting them into a form that computers can manipulate to assist in clinical decision-support is "still in its infancy".

Nevertheless, it is more feasible for a business to do this centrally, even if incompletely, than for each doctor to try to keep up with all the research being published.

In addition to being laborious, integration of new data can sometimes be difficult to quantify or incorporate into the existing decision support schema, particularly in instances where different clinical papers may appear conflicting. Properly resolving these sorts of discrepancies is itself often the subject of clinical papers (see meta-analysis), which often take months to complete.

Evaluation

In order for a CDSS to offer value, it must demonstrably improve clinical workflow or outcome. Evaluation of a CDSS quantifies its value in order to improve the system's quality and measure its effectiveness. Because different CDSSs serve different purposes, no generic metric applies to all such systems; however, attributes such as consistency (with itself and with experts) often apply across a wide spectrum of systems.

The evaluation benchmark for a CDSS depends on the system's goal: for example, a diagnostic decision support system may be rated based upon the consistency and accuracy of its classification of disease (as compared to physicians or other decision support systems). An evidence-based medicine system might be rated based upon a high incidence of patient improvement or higher financial reimbursement for care providers.
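
One simple way to quantify consistency with experts is chance-corrected agreement between the system's classifications and a physician's. A sketch using scikit-learn's Cohen's kappa on made-up diagnostic labels:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical labels assigned to the same eight cases.
    cdss_labels      = ["flu", "flu", "cold", "covid", "cold", "flu", "covid", "cold"]
    physician_labels = ["flu", "cold", "cold", "covid", "cold", "flu", "covid", "flu"]

    # 1.0 = perfect agreement, 0.0 = agreement expected by chance alone.
    print(cohen_kappa_score(cdss_labels, physician_labels))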

Combining with electronic health records

Implementing EHRs is an inevitable challenge: it is a relatively uncharted area, and the numerous studies that have been undertaken document many issues and complications during the implementation phase. While the challenges of implementing electronic health records (EHRs) have received some attention, less is known about transitioning from legacy EHRs to newer systems.

EHRs are a way to capture and utilise real-time data to provide high-quality patient care, ensuring efficiency and effective use of time and resources. Incorporating EHR and CDSS together into the process of medicine has the potential to change the way medicine has been taught and practiced. It has been said that "the highest level of EHR is a CDSS".

Since "clinical decision support systems (CDSS) are computer systems designed to impact clinician decision making about individual patients at the point in time that these decisions are made", it is clear that it would be beneficial to have a fully integrated CDSS and EHR.

Even though the benefits can be seen, fully implementing a CDSS integrated with an EHR has historically required significant planning by the healthcare facility/organisation for the CDSS to be successful and effective. The success and effectiveness can be measured by the increased patient care being delivered and reduced adverse events occurring. In addition, there would be a saving of time and resources and benefits in terms of autonomy and financial benefits to the healthcare facility/organisation.

Benefits of CDSS combined with EHR

A successful CDSS/EHR integration will allow the provision of best practice, high-quality care to the patient, which is the ultimate goal of healthcare.

Errors have always occurred in healthcare, so trying to minimise them as much as possible is important in order to provide quality patient care. Three areas that can be addressed with the implementation of CDSS and Electronic Health Records (EHRs) are:

  1. Medication prescription errors
  2. Adverse drug events
  3. Other medical errors

CDSSs will be most beneficial in the future when healthcare facilities are "100% electronic" in terms of real-time patient information, thus minimising the number of modifications needed to keep all the systems synchronised with each other.

The measurable benefits of clinical decision support systems on physician performance and patient outcomes remain the subject of ongoing research.

Barriers

Implementing EHRs in healthcare settings incurs challenges, none more important than maintaining efficiency and safety during rollout. For the implementation process to be effective, an understanding of the EHR users' perspectives is key to the success of the project. In addition, adoption needs to be actively fostered through a bottom-up, clinical-needs-first approach. The same can be said of CDSSs.

As of 2007, the main areas of concern with moving into a fully integrated EHR/CDSS system have been:

  1. Privacy
  2. Confidentiality
  3. User-friendliness
  4. Document accuracy and completeness
  5. Integration
  6. Uniformity
  7. Acceptance
  8. Alert desensitisation

as well as the key aspects of data entry that need to be addressed when implementing a CDSS in order to avoid potential adverse events (a minimal sketch of such checks follows this list). These aspects include whether:

  • correct data is being used
  • all the data has been entered into the system
  • current best practice is being followed
  • the data is evidence-based

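As a minimal sketch of the first two checks (correct and complete data), the following hypothetical validation function flags a patient record with missing or implausible fields before a CDSS acts on it. The field names and plausibility ranges are invented for illustration.

```python
# Hypothetical completeness check run before a CDSS acts on a record.
REQUIRED_FIELDS = {"patient_id", "age", "weight_kg", "medications", "allergies"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is usable."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if record.get("age") is not None and not (0 <= record["age"] <= 130):
        problems.append("implausible age")
    if record.get("weight_kg") is not None and not (0 < record["weight_kg"] < 650):
        problems.append("implausible weight")
    return problems

record = {"patient_id": "A-1001", "age": 54, "medications": ["warfarin"]}
print(validate_record(record))
# e.g. ['missing field: weight_kg', 'missing field: allergies'] (order may vary)
```
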
A service-oriented architecture has been proposed as a technical means to address some of these barriers.

Status in Australia

As of July 2015, the planned transition to EHRs in Australia is facing difficulties. Most healthcare facilities are still running completely paper-based systems; some are in a transition phase of scanned EHRs, or moving towards one.

Victoria attempted to implement EHR across the state with its HealthSMART program, but cancelled the project due to unexpectedly high costs.

South Australia (SA), however, has been slightly more successful than Victoria in implementing an EHR. This may be because all public healthcare organisations in SA are centrally run.

(On the other hand, the UK's National Health Service is also centrally administered, and its National Programme for IT in the 2000s, which included EHRs in its remit, was an expensive disaster.)

SA is in the process of implementing the Enterprise Patient Administration System (EPAS). This system is the foundation for an EHR across all public hospitals and health care sites in SA, and it was expected that all facilities in the state would be connected to it by the end of 2014, allowing for successful integration of CDSS and increasing the benefits of the EHR. However, by July 2015 it was reported that only 3 out of 75 health care facilities had implemented EPAS.

With the largest health system in the country and a federated rather than a centrally administered model, New South Wales is making consistent progress towards statewide implementation of EHRs. The current iteration of the state's technology, eMR2, includes CDSS features such as a sepsis pathway for identifying at-risk patients based upon data entered into the electronic record. As of June 2016, 93 of 194 sites in scope for the initial roll-out had implemented eMR2.

Status in Finland

The EBMEDS Clinical Decision Support service provided by Duodecim Medical Publications Ltd is used by more than 60% of Finnish public health care doctors.

Research

Prescription errors

A study in the UK tested the Salford Medication Safety Dashboard (SMASH), a web-based CDSS application that helps GPs and pharmacists find people in their electronic health records who might face safety hazards due to prescription errors. The dashboard was successfully used to identify and help patients with already-registered unsafe prescriptions, and it later helped to monitor new cases as they appeared.

Legal expert system

From Wikipedia, the free encyclopedia

A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law. Legal expert systems employ a rule base or knowledge base and an inference engine to accumulate, reference and produce expert knowledge on specific subjects within the legal domain.

Purpose

It has been suggested that legal expert systems could help to manage the rapid expansion of legal information and decisions that began to intensify in the late 1960s. Many of the first legal expert systems were created in the 1970s and 1980s.

Lawyers were originally identified as primary target users of legal expert systems. Potential motivations for this work included:

  • speedier delivery of legal advice;
  • reduced time spent in repetitive, labour intensive legal tasks;
  • development of knowledge management techniques that were not dependent on staff;
  • reduced overhead and labour costs and higher profitability for law firms; and
  • reduced fees for clients.

Some early development work was oriented toward the creation of automated judges.

Later work on legal expert systems has identified potential benefits to non-lawyers as a means to increase access to legal knowledge.

Legal expert systems can also support administrative processes, facilitating decision making processes, automating rule-based analyses and exchanging information directly with citizen-users.

Types

Architectural variations

Rule-based expert systems rely on a model of deductive reasoning that utilizes "if A, then B" rules. In a rule-based legal expert system, information is represented in the form of deductive rules within the knowledge base.
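
A minimal sketch of such a rule-based engine appears below: facts and "if A, then B" rules are repeatedly matched until no new conclusions can be derived (forward chaining). The rules are invented simplifications, not statements of actual law.

```python
# A minimal forward-chaining engine over "if A, then B" rules.
# The rules themselves are hypothetical simplifications, not real law.
RULES = [
    ({"signed_writing", "consideration"}, "valid_contract"),
    ({"valid_contract", "breach"}, "damages_available"),
]

def infer(facts: set[str]) -> set[str]:
    """Repeatedly fire rules whose antecedents are all satisfied."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in RULES:
            if antecedents <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"signed_writing", "consideration", "breach"}))
# {'signed_writing', 'consideration', 'breach',
#  'valid_contract', 'damages_available'}
```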

Case-based reasoning models, which store and manipulate examples or cases, hold the potential to emulate an analogical reasoning process thought to be well suited to the legal domain. This model effectively draws on known experiences or outcomes for similar problems.
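
The sketch below illustrates the retrieval step of case-based reasoning under strong simplifying assumptions: past cases are reduced to binary feature vectors, and the outcome of the most similar stored case is reused. The features, cases, and similarity measure are all hypothetical.

```python
# Minimal case retrieval: find the stored case most similar to a new fact
# pattern and reuse its outcome. Features and cases are hypothetical.
CASE_BASE = [
    ({"rear_end_collision": 1, "wet_road": 1, "speeding": 0}, "defendant_liable"),
    ({"rear_end_collision": 0, "wet_road": 1, "speeding": 1}, "shared_liability"),
    ({"rear_end_collision": 1, "wet_road": 0, "speeding": 1}, "defendant_liable"),
]

def similarity(a: dict, b: dict) -> int:
    """Count features on which the two cases agree."""
    return sum(a[k] == b.get(k) for k in a)

def retrieve(new_case: dict):
    return max(CASE_BASE, key=lambda c: similarity(new_case, c[0]))

facts, outcome = retrieve({"rear_end_collision": 1, "wet_road": 1, "speeding": 0})
print(outcome)  # 'defendant_liable' -- outcome of the most similar past case
```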

A neural net relies on a computer model that mimics the structure of a human brain and operates in a way very similar to the case-based reasoning model. This expert system model is capable of recognizing and classifying patterns within the realm of legal knowledge and of dealing with imprecise inputs.
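
As a toy illustration of the pattern-classification idea, the sketch below trains a single artificial neuron (a perceptron) on made-up binary legal features. Real neural-net systems are far larger, but the principle of learning weights from labelled examples is the same.

```python
# A toy single-neuron classifier over binary legal features; purely
# illustrative, with made-up features and training examples.
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward misclassified examples."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Features: (written_contract, witness_present); label: 1 = claim succeeds.
samples = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]
w, b = train(samples)
print(predict(w, b, (1, 0)))  # 1: the pattern resembles the successful claims
```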

Fuzzy logic models attempt to create 'fuzzy' concepts or objects that can then be converted into quantitative terms or rules that are indexed and retrieved by the system. In the legal domain, fuzzy logic can be used for rule-based and case-based reasoning models.
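
The sketch below shows one way a fuzzy membership function might quantify a vague legal concept, here "reasonable delivery delay". The thresholds are invented, and a real system would combine such degrees with other rules before reaching a conclusion.

```python
# Sketch of a fuzzy membership function for the vague concept
# "reasonable delivery delay"; the thresholds are invented for illustration.
def reasonable_delay(days: float) -> float:
    """1.0 = clearly reasonable, 0.0 = clearly unreasonable."""
    if days <= 7:
        return 1.0
    if days >= 30:
        return 0.0
    return (30 - days) / 23  # linear ramp between the two thresholds

# A crisp rule would flip at an arbitrary cutoff; the fuzzy degree can
# instead be weighed against other factors before a final decision.
for d in (5, 14, 25, 45):
    print(d, round(reasonable_delay(d), 2))
# 5 1.0 / 14 0.7 / 25 0.22 / 45 0.0
```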

Theoretical variations

While some legal expert system architects have adopted a very practical approach, employing scientific modes of reasoning within a given set of rules or cases, others have opted for a broader philosophical approach inspired by jurisprudential reasoning modes emanating from established legal theoreticians.

Functional variations

Some legal expert systems aim to arrive at a particular conclusion in law, while others are designed to predict a particular outcome. An example of a predictive system is one that predicts the outcome of judicial decisions, the value of a case, or the outcome of litigation.

Reception

Many forms of legal expert systems have become widely used and accepted by both the legal community and the users of legal services.

Challenges

Domain-related problems

The inherent complexity of law as a discipline raises immediate challenges for legal expert system knowledge engineers. Legal matters often involve interrelated facts and issues, which further compound the complexity.

Factual uncertainty may also arise when there are disputed versions of factual representations that must be input into an expert system to begin the reasoning process.

Computerized problem solving

The limitations of most computerized problem-solving techniques inhibit the success of many expert systems in the legal domain. Expert systems typically rely on deductive reasoning models, which have difficulty assigning degrees of weight to particular principles of law, or degrees of importance to previously decided cases, that may or may not influence a decision in an immediate case or context.

Representation of legal knowledge

Expert legal knowledge can be difficult to represent or formalize within the structure of an expert system. For knowledge engineers, challenges include:

  • Open texture: Law is rarely applied in an exact way to specific facts, and exact outcomes are rarely a certainty. Statutes may be read in light of different linguistic interpretations, precedent cases, or other contextual factors, including a particular judge's conception of fairness.
  • The balancing of reasons: Many arguments involve considerations or reasons that are not easily represented in a logical way. For instance, many constitutional legal issues are said to balance independently well-established considerations for state interests against individual rights. Such balancing may draw on extra-legal considerations that would be difficult to represent logically in an expert system.
  • Indeterminacy of legal reasoning: In the adversarial arena of law, it is common to have two strong arguments on a single point. Determining the 'right' answer may depend on a majority vote among expert judges, as in the case of an appeal.

Time and cost effectiveness

Creating a functioning expert system requires significant investments in software architecture, subject matter expertise and knowledge engineering. Faced with these challenges, many system architects restrict the domain in terms of subject matter and jurisdiction. The consequence of this approach is the creation of narrowly focused and geographically restricted legal expert systems that are difficult to justify on a cost-benefit basis.

Current applications of AI in the legal field use machines to review documents, particularly when a high level of completeness and confidence in the quality of the analysis is required, as in litigation and due diligence work. The most quantifiable advantages are savings of time and money: freeing lawyers from spending inordinate amounts of their valuable time on routine tasks reduces stress and frees their creative energy. This in turn reduces case loads by accomplishing better results in less time, unlocking potential additional revenue per unit of time spent on a case. The cost of setting up and maintaining AI systems in law can be more than offset by the savings attained through increased efficacy, and any remaining cost can be passed on to clients.

Lack of correctness in results or decisions

Legal expert systems may lead non-expert users to incorrect or inaccurate results and decisions. This problem could be compounded by the fact that users may rely heavily on the correctness or trustworthiness of results or decisions generated by these systems.

Examples

ASHSD-II is a hybrid legal expert system that blends rule-based and case-based reasoning models in the area of matrimonial property disputes under English law.

CHIRON is a hybrid legal expert system that blends rule-based and case-based reasoning models to support tax planning activities under United States tax law and codes.

JUDGE is a rule-based legal expert system that deals with sentencing in the criminal legal domain for offences relating to murder, assault and manslaughter.

Legislate is a knowledge graph powered contract management platform which applies legal rules to generate lawyer-approved contracts.

The Latent Damage Project is a rule-based legal expert system that deals with limitation periods under the (UK) Latent Damage Act 1986 in relation to the domains of tort, contract and product liability law.

Split-Up is a rule-based legal expert system that assists in the division of marital assets according to the (Australia) Family Law Act (1975).

SHYSTER is a case-based legal expert system that can also function as a hybrid through its ability to link with rule-based models. It was designed to accommodate multiple legal domains, including aspects of Australian copyright law, contract law, personal property and administrative law.

TAXMAN is a rule-based system that could perform a basic form of legal reasoning by classifying cases under a particular category of statutory rules in the area of law concerning corporate reorganization.

Controversies

There may be a lack of consensus over what distinguishes a legal expert system from a knowledge-based system (also called an intelligent knowledge-based system). While legal expert systems are held to function at the level of a human legal expert, knowledge-based systems may depend on the ongoing assistance of a human expert. True legal expert systems typically focus on a narrow domain of expertise as opposed to a wider and less specific domain as in the case of most knowledge-based systems.

Legal expert systems represent potentially disruptive technologies for the traditional, bespoke delivery of legal services. Accordingly, established legal practitioners may consider them a threat to historical business practices.

Arguments have been made that a failure to take into consideration various theoretical approaches to legal decision making will produce expert systems that fail to reflect the true nature of decision making. Meanwhile, some legal expert system architects contend that because many lawyers have proficient legal reasoning skills without a sound base in legal theory, the same should hold true for legal expert systems.

Because legal expert systems apply precision and scientific rigor to the act of legal decision-making, they may be seen as a challenge to the more disorganized and less precise dynamics of traditional jurisprudential modes of legal reasoning. Some commentators also contend that the true nature of legal practice does not necessarily depend on analyses of legal rules or principles; decisions are based instead on an expectation of what a human adjudicator would decide for a given case.

Recent developments

Since 2013, there have been significant developments in legal expert systems. Professor Tanina Rostain of Georgetown Law Center teaches a course in designing legal expert systems. Open-source platforms like Docassemble and companies such as Neota Logic and Checkbox have begun to offer artificial intelligence and machine learning-based legal expert systems.

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...