
Sunday, March 1, 2026

Topological deep learning

From Wikipedia, the free encyclopedia

Topological deep learning (TDL) is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel at processing data on regular grids and sequences. However, scientific and real-world data often live on more intricate domains, including point clouds, meshes, time series, scalar fields, graphs, or general topological spaces like simplicial complexes and CW complexes. TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of the data. TDL also encompasses methods from computational and algebraic topology that permit studying properties of neural networks and their training process, such as their predictive performance or generalization properties. The mathematical foundations of TDL are algebraic topology, differential topology, and geometric topology. TDL can therefore be generalized to data on differentiable manifolds, knots, links, tangles, curves, etc.

History and motivation

Traditional techniques from deep learning often operate under the assumption that a dataset is residing in a highly-structured space (like images, where convolutional neural networks exhibit outstanding performance over alternative methods) or a Euclidean space. The prevalence of new types of data, in particular graphs, meshes, and molecules, resulted in the development of new techniques, culminating in the field of geometric deep learning, which originally proposed a signal-processing perspective for treating such data types. While originally confined to graphs, where connectivity is defined based on nodes and edges, follow-up work extended concepts to a larger variety of data types, including simplicial complexes and CW complexes, with recent work proposing a unified perspective of message-passing on general combinatorial complexes.

An independent perspective on different types of data originated from topological data analysis, which proposed a new framework for describing structural information of data, i.e., their "shape," that is inherently aware of multiple scales in data, ranging from local information to global information. While at first restricted to smaller datasets, subsequent work developed new descriptors that efficiently summarized topological information of datasets to make them available for traditional machine-learning techniques, such as support vector machines or random forests. Such descriptors ranged from new techniques for feature engineering over new ways of providing suitable coordinates for topological descriptors, or the creation of more efficient dissimilarity measures.

Contemporary research in this field is largely concerned with either integrating information about the underlying data topology into existing deep-learning models or obtaining novel ways of training on topological domains.

Learning on topological spaces

Learning tasks on topological domains can be broadly classified into three categories: cell classification, cell prediction, and complex classification.

Focusing on topology in the sense of point set topology, an active branch of TDL is concerned with learning on topological spaces, that is, on different topological domains.

An introduction to topological domains

One of the core concepts in topological deep learning is the domain upon which data is defined and supported. In the case of Euclidean data, such as images, this domain is a grid, upon which the pixel values of the image are supported. In a more general setting this domain might be a topological domain. Next, we introduce the most common topological domains that are encountered in a deep learning setting. These domains include, but are not limited to, graphs, simplicial complexes, cell complexes, combinatorial complexes, and hypergraphs.

Given a finite set S of abstract entities, a neighborhood function on S is an assignment that attaches to every point in S a subset of S or a relation. Such a function can be induced by equipping S with an auxiliary structure. Edges provide one way of defining relations among the entities of S. More specifically, edges in a graph allow one to define the notion of neighborhood using, for instance, the one-hop neighborhood notion. Edges, however, are limited in their modeling capacity, as they can only model binary relations among entities of S, since every edge typically connects two entities. In many applications, it is desirable to permit relations that incorporate more than two entities. The idea of using relations that involve more than two entities is central to topological domains. Such higher-order relations allow for a broader range of neighborhood functions to be defined on S to capture multi-way interactions among entities of S.
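The one-hop neighborhood function induced by a graph's edge set can be sketched in a few lines; the entity set and edge list below are illustrative assumptions, not taken from the text.

```python
# One-hop neighborhood function induced by the edge set of a graph.
def one_hop(edges):
    """Build the neighborhood function N: entity -> set of adjacent entities."""
    nbhd = {}
    for u, v in edges:
        nbhd.setdefault(u, set()).add(v)
        nbhd.setdefault(v, set()).add(u)
    return nbhd

# Illustrative entity set S = {0, 1, 2, 3} connected as a path.
edges = [(0, 1), (1, 2), (2, 3)]
N = one_hop(edges)
# Each edge encodes a binary relation: entity 1 neighbors 0 and 2.
print(N[1])  # {0, 2}
```

Because each edge touches exactly two entities, this construction can never express a relation among three or more entities at once, which is the limitation higher-order domains remove.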

Next we review the main properties, advantages, and disadvantages of some commonly studied topological domains in the context of deep learning, including (abstract) simplicial complexes, regular cell complexes, hypergraphs, and combinatorial complexes.

(a): A set S is made up of basic parts (vertices) without any connections.
(b): A graph represents simple connections between its parts (vertices), which are elements of S.
(c): A simplicial complex shows how parts (relations) are connected to each other, but with strict rules about how they are connected.
(d): Like simplicial complexes, a cell complex shows how parts (relations) are connected, but it is more flexible in how they are shaped (like 'cells').
(e): A hypergraph shows any kind of connections between parts of S, but these connections are not organized in any particular order.
(f): A combinatorial complex (CC) mixes elements from cell complexes (connections with order) and hypergraphs (varied connections), covering both kinds of setups.

Comparisons among topological domains

Each of the enumerated topological domains has its own characteristics, advantages, and limitations:

  • Simplicial complexes
    • Simplest form of higher-order domains.
    • Extensions of graph-based models.
    • Admit hierarchical structures, making them suitable for various applications.
    • Hodge theory can be naturally defined on simplicial complexes.
    • Require relations to be subsets of larger relations, imposing constraints on the structure.
  • Cell complexes
    • Generalize simplicial complexes.
    • Provide more flexibility in defining higher-order relations.
    • Each cell in a cell complex is homeomorphic to an open ball; cells are attached to one another via attaching maps.
    • Boundary cells of each cell in a cell complex are also cells in the complex.
    • Represented combinatorially via incidence matrices.
  • Hypergraphs
    • Allow arbitrary set-type relations among entities.
    • Relations are not imposed by other relations, providing more flexibility.
    • Do not explicitly encode the dimension of cells or relations.
    • Useful when relations in the data do not adhere to constraints imposed by other models like simplicial and cell complexes.
  • Combinatorial complexes
    • Generalize and bridge the gaps between simplicial complexes, cell complexes, and hypergraphs.
    • Allow for hierarchical structures and set-type relations.
    • Combine features of other complexes while providing more flexibility in modeling relations.
    • Can be represented combinatorially, similar to cell complexes.
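The structural constraint that separates simplicial complexes from hypergraphs above (every subset of a relation must itself be a relation) can be checked directly; the small complexes below are illustrative examples.

```python
from itertools import combinations

def is_simplicial(relations):
    """A family of relations is a simplicial complex iff every nonempty
    proper subset of each relation is itself a relation (downward closure)."""
    rels = {frozenset(r) for r in relations}
    for r in rels:
        for k in range(1, len(r)):
            for face in combinations(r, k):
                if frozenset(face) not in rels:
                    return False
    return True

# Triangle {0,1,2} with all its edges and vertices: a simplicial complex.
sc = [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]
# A hyperedge {0,1,2} without its faces: a valid hypergraph, not simplicial.
hg = [{0}, {1}, {2}, {0, 1, 2}]
print(is_simplicial(sc), is_simplicial(hg))  # True False
```

The second family is a perfectly good hypergraph precisely because hypergraphs do not require relations to be implied by larger relations.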

Hierarchical structure and set-type relations

The properties of simplicial complexes, cell complexes, and hypergraphs give rise to two main features of relations on higher-order domains, namely hierarchies of relations and set-type relations.

Rank function

A rank function on a higher-order domain $X$ is an order-preserving function $\mathrm{rk}\colon X \to \mathbb{Z}_{\geq 0}$, where $\mathrm{rk}(x)$ attaches a non-negative integer value to each relation $x$ in $X$, preserving set inclusion in $X$. Cell and simplicial complexes are common examples of higher-order domains equipped with rank functions and therefore with hierarchies of relations.
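For a simplicial complex, a natural rank function assigns each cell its cardinality minus one; the sketch below, with an illustrative chain of cells, checks that this assignment preserves set inclusion.

```python
def rk(cell):
    """Rank of a cell in a simplicial complex: cardinality minus one,
    so vertices get rank 0, edges rank 1, triangles rank 2."""
    return len(cell) - 1

cells = [frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]
ranks = [rk(c) for c in cells]
print(ranks)  # [0, 1, 2]

# Order-preserving: x ⊆ y implies rk(x) <= rk(y).
assert all(rk(x) <= rk(y) for x in cells for y in cells if x <= y)
```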

Set-type relations

Relations in a higher-order domain are called set-type relations if the existence of a relation is not implied by another relation in the domain. Hypergraphs constitute examples of higher-order domains equipped with set-type relations. Given the modeling limitations of simplicial complexes, cell complexes, and hypergraphs, the combinatorial complex was introduced: a higher-order domain that features both hierarchies of relations and set-type relations.

The learning tasks in TDL can be broadly classified into three categories:

  • Cell classification: Predict targets for each cell in a complex. Examples include triangular mesh segmentation, where the task is to predict the class of each face or edge in a given mesh.
  • Complex classification: Predict targets for an entire complex. For example, predict the class of each input mesh.
  • Cell prediction: Predict properties of cell-cell interactions in a complex, and in some cases, predict whether a cell exists in the complex. An example is the prediction of linkages among entities in hyperedges of a hypergraph.

In practice, to perform the aforementioned tasks, deep learning models designed for specific topological spaces must be constructed and implemented. These models, known as topological neural networks, are tailored to operate effectively within these spaces.

Topological neural networks

Central to TDL are topological neural networks (TNNs), specialized architectures designed to operate on data structured in topological domains. Unlike traditional neural networks tailored for grid-like structures, TNNs are adept at handling more intricate data representations, such as graphs, simplicial complexes, and cell complexes. By harnessing the inherent topology of the data, TNNs can capture both local and global relationships, enabling nuanced analysis and interpretation.

Message passing topological neural networks

In a general topological domain, higher-order message passing involves exchanging messages among entities and cells using a set of neighborhood functions.

Definition: Higher-Order Message Passing on a General Topological Domain

Higher-order message passing is a deep learning model defined on a topological domain that relies on passing messages among entities in the underlying domain in order to perform a learning task.

Let $\mathcal{X}$ be a topological domain. We define a set of neighborhood functions $\mathcal{N} = \{\mathcal{N}_1, \ldots, \mathcal{N}_n\}$ on $\mathcal{X}$. Consider a cell $x$ and let $y \in \mathcal{N}_k(x)$ for some $\mathcal{N}_k \in \mathcal{N}$. A message $m_{x,y}$ between cells $x$ and $y$ is a computation dependent on these two cells or the data supported on them. Denote $\mathcal{N}(x)$ as the multi-set $\{\!\!\{\mathcal{N}_1(x), \ldots, \mathcal{N}_n(x)\}\!\!\}$, and let $h_x^{(l)}$ represent some data supported on cell $x$ at layer $l$. Higher-order message passing on $\mathcal{X}$, induced by $\mathcal{N}$, is defined by the following four update rules:

  1. $m_{x,y} = \alpha_{\mathcal{N}_k}(h_x^{(l)}, h_y^{(l)})$, the message between cells $x$ and $y$.
  2. $m_x^k = \bigoplus_{y \in \mathcal{N}_k(x)} m_{x,y}$, where $\bigoplus$ is the intra-neighborhood aggregation function.
  3. $m_x = \bigotimes_{\mathcal{N}_k \in \mathcal{N}} m_x^k$, where $\bigotimes$ is the inter-neighborhood aggregation function.
  4. $h_x^{(l+1)} = \beta(h_x^{(l)}, m_x)$, where $\alpha_{\mathcal{N}_k}$ and $\beta$ are differentiable functions.

Some remarks on the definition above are as follows.

First, Equation 1 describes how messages are computed between cells $x$ and $y$. The message $m_{x,y}$ is influenced by both the data $h_x^{(l)}$ and $h_y^{(l)}$ associated with cells $x$ and $y$, respectively. Additionally, it incorporates characteristics specific to the cells themselves, such as orientation in the case of cell complexes. This allows for a richer representation of spatial relationships compared to traditional graph-based message passing frameworks.

Second, Equation 2 defines how messages from neighboring cells are aggregated within each neighborhood. The intra-neighborhood aggregation function $\bigoplus$ combines these messages, allowing information to be exchanged effectively between adjacent cells within the same neighborhood.

Third, Equation 3 outlines the process of combining messages from different neighborhoods. The inter-neighborhood aggregation function $\bigotimes$ merges messages across various neighborhoods, facilitating communication between cells that may not be directly connected but share common neighborhood relationships.

Fourth, Equation 4 specifies how the aggregated messages influence the state of a cell in the next layer. Here, the function $\beta$ updates the state of cell $x$ based on its current state $h_x^{(l)}$ and the aggregated message $m_x$ obtained from neighboring cells.
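The four update rules can be sketched on a toy domain. Everything here is an illustrative assumption rather than a prescribed architecture: the three cells, the two neighborhood functions, the random weights, and the choices of sum for both aggregations and tanh for the update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny domain: three cells with 2-dim features, plus two hypothetical
# neighborhood functions mapping each cell to a list of neighbor cells.
h = {c: rng.normal(size=2) for c in "abc"}
neighborhoods = [
    {"a": ["b"], "b": ["a", "c"], "c": ["b"]},  # N_1, e.g. adjacency
    {"a": ["c"], "b": [], "c": ["a"]},          # N_2, e.g. coboundary
]
W = [rng.normal(size=(2, 2)) for _ in neighborhoods]  # per-neighborhood weights

def layer(h):
    h_next = {}
    for x in h:
        per_nbhd = []
        for k, N_k in enumerate(neighborhoods):
            # Rule 1: message m_{x,y} = W_k h_y for each y in N_k(x).
            msgs = [W[k] @ h[y] for y in N_k[x]]
            # Rule 2: intra-neighborhood aggregation (sum within N_k(x)).
            if msgs:
                per_nbhd.append(np.sum(msgs, axis=0))
        # Rule 3: inter-neighborhood aggregation (sum across neighborhoods).
        m_x = np.sum(per_nbhd, axis=0) if per_nbhd else np.zeros(2)
        # Rule 4: differentiable update of the cell state.
        h_next[x] = np.tanh(h[x] + m_x)
    return h_next

h1 = layer(h)
print({x: v.round(3) for x, v in h1.items()})
```

Stacking several such layers lets information propagate between cells that share no direct relation but are linked through chains of neighborhoods.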

Non-message passing topological neural networks

While the majority of TNNs follow the message passing paradigm from graph learning, several models have been suggested that do not follow this approach. For instance, Maggs et al. leverage geometric information from embedded simplicial complexes, i.e., simplicial complexes with high-dimensional features attached to their vertices. This offers interpretability and geometric consistency without relying on message passing. Furthermore, a contrastive loss-based method has also been suggested for learning the simplicial representation.

Learning on topological descriptors

Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis, and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneering work by Hofer et al., for instance, introduced a layer that permitted topological descriptors like persistence diagrams or persistence barcodes to be integrated into a deep neural network. This was achieved by means of end-to-end-trainable projection functions, permitting topological features to be used to solve shape classification tasks, for instance. Follow-up work expanded more on the theoretical properties of such descriptors and integrated them into the field of representation learning. Other such topological layers include layers based on extended persistent homology descriptors, persistence landscapes, or coordinate functions. In parallel, persistent homology also found applications in graph-learning tasks. Noteworthy examples include new algorithms for learning task-specific filtration functions for graph classification or node classification tasks.
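The projection idea can be illustrated by mapping a persistence diagram onto fixed Gaussian structure elements, yielding a fixed-size vector a downstream network can consume. In trainable layers such as Hofer et al.'s, the centers and widths are learned end to end; here they are fixed, and the diagram, centers, and bandwidth are all illustrative values.

```python
import numpy as np

def diagram_features(diagram, centers, sigma=0.5):
    """Project a persistence diagram (list of (birth, death) pairs) onto
    fixed 2-D Gaussian bumps, producing one scalar response per center."""
    D = np.asarray(diagram, dtype=float)        # shape (n_points, 2)
    C = np.asarray(centers, dtype=float)        # shape (n_centers, 2)
    d2 = ((D[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-d2 / (2 * sigma**2)).sum(0)  # shape (n_centers,)

# Illustrative diagram with two long-lived points and one short-lived one.
diagram = [(0.0, 1.0), (0.2, 0.9), (0.1, 0.3)]
centers = [(0.0, 1.0), (0.5, 0.5)]
vec = diagram_features(diagram, centers)
print(vec.shape)  # (2,)
```

The output is permutation-invariant in the diagram points and has fixed length regardless of how many points the diagram contains, which is exactly what makes such descriptors pluggable into a standard deep network.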

Applications

TDL is rapidly finding new applications across different domains, including data compression, enhancing the expressivity and predictive performance of graph neural networks, action recognition, and trajectory prediction.

Differential equation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Differential_equation

In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common in mathematical models and scientific laws; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.

Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers, and many numerical methods have been developed to determine solutions with a given degree of accuracy. The theory of dynamical systems analyzes the qualitative aspects of solutions, such as their average behavior over a long time interval.

History

Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Newton listed three kinds of differential equations:

$\frac{dy}{dx} = f(x)$

$\frac{dy}{dx} = f(x, y)$

$x_1 \frac{\partial y}{\partial x_1} + x_2 \frac{\partial y}{\partial x_2} = y$

In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function. He solved these examples and others using infinite series and discussed the non-uniqueness of solutions.

Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form

$y' + P(x)y = Q(x)y^n$

for which Leibniz obtained solutions the following year by simplifying it.

Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.

The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.

In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum.

Example

In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.

In some cases, this differential equation (called an equation of motion) may be solved explicitly.

An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
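A minimal numerical sketch of this model, with illustrative constants, integrates the velocity equation dv/dt = g − kv (gravity minus drag proportional to velocity) by Euler's method and compares the result against the closed-form solution.

```python
import math

# Illustrative constants: gravity and a linear drag coefficient.
g, k = 9.81, 0.5
dt, T = 1e-4, 5.0
n = int(T / dt)

# Euler integration of dv/dt = g - k*v with v(0) = 0.
v = 0.0
for _ in range(n):
    v += (g - k * v) * dt

# Closed-form solution of the same equation for comparison.
exact = (g / k) * (1 - math.exp(-k * T))
print(v, exact)  # both ≈ 18.0
```

As T grows, both values approach the terminal velocity g/k, where gravity and air resistance balance.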

Types

Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.

Ordinary differential equations

An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a dependent variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.

As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.

Partial differential equations

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or using a relevant computer model.

PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.
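As a concrete sketch of one such phenomenon, the one-dimensional heat equation u_t = α u_xx can be approximated with an explicit finite-difference scheme; the grid sizes, time step, and initial profile below are illustrative choices.

```python
import numpy as np

# Explicit finite differences for the 1-D heat equation u_t = alpha * u_xx
# on [0, 1] with u = 0 held at both ends.
alpha, nx = 1.0, 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # stable: dt <= dx^2 / (2 * alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)             # initial temperature profile

for _ in range(200):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# For this initial condition the exact solution decays exponentially.
t = 200 * dt
exact = np.exp(-np.pi**2 * alpha * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))
```

The stability restriction on dt is characteristic of explicit schemes for parabolic PDEs; violating it makes the numerical solution blow up even though the true solution decays.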

Linear differential equations

Linear differential equations are differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals.

Many differential equations that are encountered in physics are linear, for example ODEs describing radioactive decay and PDEs for heat transfer by thermal diffusion. These lead to special functions, which may be defined as solutions of linear differential equations (see Holonomic function).

Non-linear differential equations

A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence and uniqueness of solutions for nonlinear differential equations are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.

In some circumstances, nonlinear differential equations may be approximated by linear ones. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations. Similarly, when a fixed point or stationary solution of a nonlinear differential equation has been found, investigation of its stability leads to a linear differential equation.
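The pendulum example can be checked numerically: for a small initial amplitude, the nonlinear equation θ'' = −(g/L) sin θ and its linearization θ'' = −(g/L) θ stay close. Constants, amplitude, and step sizes below are illustrative.

```python
import math

g, L = 9.81, 1.0
dt, steps = 1e-4, 20000          # integrate for 2 seconds
theta0 = 0.05                    # small initial amplitude (radians)

def simulate(accel):
    """Semi-implicit Euler for theta'' = accel(theta), starting at rest."""
    th, om = theta0, 0.0
    for _ in range(steps):
        om += accel(th) * dt
        th += om * dt
    return th

nonlinear = simulate(lambda th: -(g / L) * math.sin(th))
linear = simulate(lambda th: -(g / L) * th)
print(abs(nonlinear - linear))   # small for small amplitudes
```

Repeating the experiment with a large amplitude (say θ0 = 1.5) makes the two trajectories diverge, since sin θ ≈ θ fails well away from zero.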

Equation order and degree

The order of the differential equation is the highest order of derivative of the unknown function that appears in the differential equation. For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing the second-order derivative is a second-order differential equation, and so on.

When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function, or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but a non-linear differential equation such as $y' + y^2 = 0$ is of degree one for the first meaning but not for the second one.

Differential equations that describe natural phenomena usually have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation.

Homogeneous linear equations

A linear differential equation is homogeneous if each term in the equation includes either the dependent variable or one of its derivatives. If this is not the case, so that there is a term that does not include either the dependent variable itself or a derivative of it, the equation is inhomogeneous or heterogeneous. See the examples section below.

Examples

The first group of examples are ordinary differential equations, where u is an unknown function of x, and c and ω are constants that are assumed to be known. These examples illustrate the distinction between linear and nonlinear differential equations, and between homogeneous differential equations and inhomogeneous ones, defined above.

  • Inhomogeneous first-order linear constant-coefficient ordinary differential equation: $\frac{du}{dx} = cu + x^2$
  • Homogeneous second-order linear ordinary differential equation: $\frac{d^2u}{dx^2} - x\frac{du}{dx} + u = 0$
  • Homogeneous second-order linear constant-coefficient ordinary differential equation describing the harmonic oscillator: $\frac{d^2u}{dx^2} + \omega^2 u = 0$
  • First-order nonlinear ordinary differential equation: $\frac{du}{dx} = u^2 + 4$
  • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L: $L\frac{d^2u}{dx^2} + g\sin u = 0$

The next group of examples are partial differential equations. The unknown function u depends on two variables x and t or x and y.

  • Homogeneous first-order linear partial differential equation: $\frac{\partial u}{\partial t} + t\frac{\partial u}{\partial x} = 0$
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$
  • Third-order non-linear partial differential equation, the Korteweg–de Vries (KdV) equation: $\frac{\partial u}{\partial t} = 6u\frac{\partial u}{\partial x} - \frac{\partial^3 u}{\partial x^3}$
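One can verify numerically that a candidate function satisfies such an equation. The sketch below checks that u(x) = sin(ωx) solves the harmonic oscillator equation u'' + ω²u = 0 up to finite-difference error; the value of ω and the sample points are illustrative.

```python
import math

omega, h = 2.0, 1e-4
u = lambda x: math.sin(omega * x)

def residual(x):
    """Central-difference estimate of u'' + omega^2 * u at x."""
    u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    return u_xx + omega**2 * u(x)

worst = max(abs(residual(0.1 * i) for i in [i])[0] if False else abs(residual(0.1 * i)) for i in range(1, 11))
print(worst)  # ~0, up to O(h^2) truncation and rounding error
```

The residual is not exactly zero because the second derivative is approximated, but it shrinks toward the scheme's truncation error as h decreases (until floating-point cancellation takes over).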

Initial conditions and boundary conditions

The general solution of a first-order ordinary differential equation includes a constant, which can be thought of as a constant of integration. Similarly, the general solution of an nth order ODE contains n constants.

To determine the values of these constants, additional conditions must be provided. If the independent variable corresponds to time, this information takes the form of initial conditions. For example, for a second-order ODE describing the motion of a particle, the initial conditions would typically be the position and velocity of the particle at the initial time. The ODE and its initial conditions form what is known as an initial value problem.

For the case of a spatial independent variable, these conditions are generally known as boundary conditions. These are often specified at different values of the independent variable. Examples include the motion of a vibrating string that is fixed at two endpoints. In this case the ODE and boundary conditions lead to a boundary value problem.

More generally, the term initial conditions is normally used when the conditions are given at the same value of the independent variable, and the term boundary conditions is used when they are specified at different values of the independent variable. In either case, the number of initial or boundary conditions should match the order of the differential equation.

Existence of solutions

For a given differential equation, the questions of whether solutions are unique or exist at all are notable subjects of interest.

For a first-order initial value problem, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point $(a, b)$ in the xy-plane, define some rectangular region $Z$, such that $Z = [l, m] \times [n, p]$ and $(a, b)$ is in the interior of $Z$. If we are given a differential equation $\frac{dy}{dx} = g(x, y)$ and the condition that $y = b$ when $x = a$, then there is locally a solution to this problem if $g(x, y)$ is continuous on $Z$. This solution exists on some interval with its center at $a$. The solution may not be unique. (See Ordinary differential equation for other results.)

However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:

$f_n(x)\frac{d^n y}{dx^n} + \cdots + f_1(x)\frac{dy}{dx} + f_0(x)y = g(x)$

such that

$y(x_0) = y_0,\quad y'(x_0) = y'_0,\quad y''(x_0) = y''_0,\quad \ldots$

For any nonzero $f_n(x)$, if $\{f_0, f_1, \ldots, f_n\}$ and $g$ are continuous on some interval containing $x_0$, then $y$ exists and is unique.

Connection to difference equations

Differential equations are closely related to difference equations, in which the independent variable assumes only discrete values, and the equation relates the value of the unknown function at a point to its values at nearby points. Many numerical methods for differential equations, for example the Euler method, involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
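A minimal illustration: Euler's method turns y' = y into the difference equation y_{n+1} = y_n + h·y_n, whose solution (1 + h)^n converges to e^x as the step size h shrinks.

```python
import math

def euler_exp(x, n):
    """Approximate e^x by solving the difference equation
    y_{k+1} = y_k + h * y_k, the Euler discretization of y' = y."""
    h = x / n
    y = 1.0                      # initial condition y(0) = 1
    for _ in range(n):
        y += h * y
    return y

approx = euler_exp(1.0, 100000)
print(approx, math.e)            # approx approaches e as n grows
```

Halving the step size roughly halves the error, reflecting the first-order accuracy of the Euler method.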

Applications

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics is concerned with finding solutions, either directly or approximately, and studying their behaviour. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not have closed form solutions. Instead, solutions can be approximated using numerical methods.

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. When this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

The number of differential equations that have received a name, in various scientific areas, demonstrates the importance of the topic. See List of named differential equations.

Neurotechnology

From Wikipedia, the free encyclopedia

Neurotechnology encompasses any method or electronic device which interfaces with the nervous system to monitor or modulate neural activity.

Common design goals for neurotechnologies include using neural activity readings to control external devices such as neuroprosthetics, altering neural activity via neuromodulation to repair or normalize function affected by neurological disorders, or augmenting cognitive abilities. In addition to their therapeutic or commercial uses, neurotechnologies also constitute powerful research tools to advance fundamental neuroscience knowledge.

Some examples of neurotechnologies include deep brain stimulation, photostimulation based on optogenetics and photopharmacology, transcranial magnetic stimulation, transcranial electric stimulation and brain–computer interfaces, such as cochlear implants and retinal implants.

The field of neurotechnology has been around for nearly half a century but has only reached maturity in the last twenty years. Decoding basic procedures and interactions within the brain's neuronal activity is essential to integrate machines with the nervous system. This is one of the central steps of the technological revolution based on a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres. Integrating an electronic device with the nervous system enables monitoring and modulating neural activity as well as controlling implanted machines through mental activity. Further work in this direction would have profound implications for improving existing treatments, developing new treatments for neurological disorders, and advancing "implantable neurotechnologies" as integrated artificial implants for various parts of the nervous system. Advances in these efforts are associated with developing models based on knowledge about natural processes in bio-systems that monitor and/or modulate neural activity. One promising direction evolves through studying the mother-fetus neurocognitive model. According to this model, an innate natural mechanism ensures the correct (balanced) development of the embryonic nervous system. Because the mother-fetus interaction enables the child's nervous system to evolve with adequate biological sentience, similar environmental conditions may be used to treat an injured nervous system. This means that the physiological processes of this natural neurostimulation during gestation underlie noninvasive artificial neuromodulation techniques. This knowledge paves the way for designing and precisely tuning noninvasive brain stimulation devices for treating different nervous system diseases by modulating neural activity.

More specialized sectors of neurotechnology development for monitoring and modulating neural activity are aimed at realizing such concepts as "neuron-like electrodes", "biohybrid electrodes", "planar complementary metal-oxide semiconductor systems", "injectable bioconjugate nanomaterials", and "implantable optoelectronic microchips".

The advent of brain imaging revolutionized the field, allowing researchers to directly monitor the brain's activity during experiments. Applications of neurotechnology can be found in fields ranging from pharmaceutical practice, including drugs for depression, sleep disorders, ADHD, and neuroses, to cancer scanning, stroke rehabilitation, and beyond.

Many in the field aim to control and harness more of what the brain does and how it influences lifestyles and personalities. Commonplace technologies already attempt to do this; games like BrainAge and programs like Fast ForWord, which aim to improve brain function, are neurotechnologies.

Currently, modern science can image nearly all aspects of the brain and control aspects of its function to a degree. It can help manage depression, over-activation, sleep deprivation, and many other conditions. Therapeutically, it can help improve stroke patients' motor coordination, improve brain function, reduce epileptic episodes (see epilepsy), improve the condition of patients with degenerative motor diseases (Parkinson's disease, Huntington's disease, ALS), and even help alleviate phantom pain perception. Advances in the field promise many new enhancement and rehabilitation methods for patients with neurological problems. The neurotechnology revolution gave rise to the Decade of the Mind initiative, which was started in 2007. It also offers the possibility of revealing the mechanisms by which mind and consciousness emerge from the brain.

Types

Neurostimulation

A wide range of neurostimulation techniques exists, which can be divided into four domains according to the type of energy used for stimulation: acoustic wave energy, electrical energy, electromagnetic radiation, and magnetic energy. Some of these techniques are presented below:

Deep brain stimulation

Deep brain stimulation is currently used in patients with movement disorders to improve their quality of life.

Transcranial ultrasound stimulation

Transcranial ultrasound stimulation (TUS) is a technique using ultrasound to modulate neural activity in the brain. It is an emerging technique that has shown therapeutic promise in a variety of neurological diseases.

Transcranial magnetic stimulation

Transcranial magnetic stimulation (TMS) is a technique for applying magnetic fields to the brain to manipulate electrical activity at specific loci in the brain. This field of study is currently receiving a large amount of attention due to the potential benefits that could come from better understanding this technology. Transcranial magnetic movement of particles in the brain also shows promise for drug targeting and delivery, as studies have demonstrated the approach to be noninvasive with respect to brain physiology.

Transcranial magnetic stimulation is a relatively new method of studying how the brain functions and is used in many research labs focused on behavioral disorders, epilepsy, PTSD, migraine, hallucinations, and other disorders. Currently, repetitive transcranial magnetic stimulation is being researched to see if positive behavioral effects of TMS can be made more permanent. Some techniques combine TMS and another scanning method such as EEG to get additional information about brain activity such as cortical response.

Transcranial direct current stimulation

Transcranial direct current stimulation (TDCS) is a form of neurostimulation which uses constant, low current delivered via electrodes placed on the scalp. The mechanisms underlying TDCS effects are still incompletely understood, but recent advances in neurotechnology allowing for in vivo assessment of brain electric activity during TDCS promise to advance understanding of these mechanisms. Research into using TDCS on healthy adults has demonstrated that TDCS can increase cognitive performance on a variety of tasks, depending on the area of the brain being stimulated. TDCS has been used to enhance language and mathematical ability (though one form of TDCS was also found to inhibit math learning), attention span, problem solving, memory, and coordination, and to relieve depression and chronic fatigue.

Electrophysiology

Electroencephalography (EEG) is a method of measuring brainwave activity non-invasively. A number of electrodes are placed around the head and scalp and electrical signals are measured. Clinically, EEGs are used to study epilepsy as well as stroke and tumor presence in the brain. Electrocorticography (ECoG) relies on similar principles but requires invasive implantation of electrodes on the brain's surface to measure local field potentials or action potentials more sensitively.
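As an illustration of how such recordings are quantified in practice, the sketch below estimates the power in a conventional EEG frequency band (here the 8–13 Hz alpha band) from a synthetic one-second trace using NumPy. The signal, sampling rate, and band edges are illustrative assumptions for a minimal example, not part of any specific clinical protocol.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Estimate the power of `signal` in the band [f_lo, f_hi) Hz
    from a simple periodogram (squared FFT magnitude)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

# Synthetic 1-second "EEG" trace: a 10 Hz alpha rhythm plus noise.
fs = 256  # sampling rate in Hz (assumed for illustration)
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)

alpha = band_power(eeg, fs, 8, 13)  # alpha band (8-13 Hz)
delta = band_power(eeg, fs, 1, 4)   # delta band (1-4 Hz)
```

For this synthetic trace, the alpha-band power dominates, reflecting the embedded 10 Hz rhythm; clinical pipelines use the same idea with calibrated hardware and more robust spectral estimators.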

Magnetoencephalography (MEG) is another method of measuring activity in the brain by measuring the magnetic fields that arise from electrical currents in the brain. The benefit to using MEG instead of EEG is that these fields are highly localized and give rise to better understanding of how specific loci react to stimulation or if these regions over-activate (as in epileptic seizures).

There are potential uses for EEG and MEG such as charting rehabilitation and improvement after trauma, as well as testing neural conductivity in specific brain regions of patients with epilepsy or personality disorders. EEG has been fundamental in understanding the resting brain during sleep. Real-time EEG has been considered for use in lie detection. Similarly, real-time fMRI is being researched as a method for pain therapy by altering how people perceive pain if they are made aware of how their brain is functioning while in pain. By providing direct and understandable feedback, researchers can help patients with chronic pain decrease their symptoms.

Implants

Neurotechnological implants can be used to record and utilize brain activity to control other devices which provide feedback to the user or replace missing biological functions. The most common neurodevices available for clinical use are deep brain stimulators implanted in the subthalamic nucleus for patients with Parkinson's disease.

Pharmaceuticals

Pharmaceuticals play a vital role in maintaining stable brain chemistry, and are the neurotechnology most commonly used by the general public and in medicine. Drugs like sertraline, methylphenidate, and zolpidem act as chemical modulators in the brain, allowing normal activity in many people whose brains would otherwise not function normally. While pharmaceuticals are usually not discussed alongside other neurotechnologies and constitute a field of their own, their role is perhaps the most far-reaching and commonplace in modern society. Movement of magnetic particles to targeted brain regions for drug delivery is an emerging field of study and causes no detectable circuit damage.

Ethical considerations

Like other disruptive innovations, neurotechnologies have the potential for profound social and legal repercussions, and as such their development and introduction to society raise a series of ethical questions.

Key concerns include the preservation of identity, agency, cognitive liberty and privacy as neurorights. While experts agree that these core features of the human experience stand to benefit from the ethical use of neurotechnology, they also make a point of emphasizing the importance of preventively establishing specific regulatory frameworks and other mechanisms that protect against inappropriate or unauthorized uses.

Identity

Identity in this context refers to personal continuity, described as bodily and mental integrity and their persistence over time. In other words, it is the individual's self-narrative and concept of self.

While disruption of identity is not a common goal for neurotechnologies, some techniques can create unwanted shifts that range in severity. For instance, deep brain stimulation is commonly used as treatment for Parkinson's disease but can have side effects that touch on the concept of identity, such as loss of voice modulation, increased impulsivity or feelings of self-estrangement. In the case of neural prostheses and brain-computer interfaces, the shift may take the form of an extension of one's sense of self, potentially incorporating the device as an integral part of oneself or expanding the range of sensory and cognitive channels available to the user beyond the traditional senses.

Part of the difficulty in determining which changes constitute a threat to identity is rooted in its dynamic nature: since one's personality and concept of self is expected to change with time as a result of emotional development and lived experience, it is not easy to identify clear criteria and draw a line between acceptable shifts and problematic changes. This becomes even harder when dealing with neurotechnologies aimed at influencing psychological processes, such as those designed to reduce the symptoms of depression or post-traumatic stress disorder (PTSD) by modulating emotional states or the saliency of memories to ease a patient's pain. Even helping a patient remember, which would seemingly help preserve identity, can be a delicate question: "Forgetting is also important to how a person navigates the world, since it allows the opportunity for both losing track of embarrassing or difficult memories, and focusing on future-oriented activity. Efforts to enhance identity through memory preservation thus run the risk of inadvertently damaging a valuable, if less consciously-driven cognitive process."

Agency

Although the nuances of its definition are debated in philosophy and sociology, agency is commonly understood as the individual's ability to consciously make and communicate a decision or choice. While identity and agency are distinct, an impairment in agency can in turn undermine personal identity: the subject may no longer be able to substantially modify their own self-narrative, and may therefore lose their ability to contribute to the dynamic process of identity formation.

The interplay between agency and neurotechnology can have implications for moral responsibility and legal liability. As with identity, devices aimed at treating some psychiatric conditions like depression or anorexia may work by modulating neural function linked with desire or motivation, potentially compromising the user's agency. This can also be the case, paradoxically, for those neurotechnologies designed to restore agency to patients, such as neural prostheses and BCI-mediated assistive technology like wheelchairs or computer accessibility tools. Because these devices often operate by interpreting sensory inputs or the user's neural data in order to estimate the individual's intention and respond accordingly, estimation margins can lead to inaccurate or undesired responses that may threaten agency: "If the agent's intent and the device's output can come apart (think of how the auto-correct function in texting sometimes misinterprets the user's intent and sends problematic text messages), the user's sense of agency may be undermined."

Privacy

Finally, as these technologies are developed, society must understand that they could reveal the one thing people have always been able to keep secret: what they are thinking. While these technologies carry large potential benefits, it is necessary for scientists, citizens, and policy makers alike to consider the implications for privacy. This concern is central to many ethical circles focused on the state and goals of progress in the field of neurotechnology (see neuroethics). Current developments such as "brain fingerprinting" or lie detection using EEG or fMRI could eventually map fixed relationships between brain loci and emotional states, although these technologies are still years away from full application. It is important to consider how all these neurotechnologies might affect the future of society, and it is suggested that political, scientific, and civil debates be held about the implementation of these newer technologies, which potentially offer a new wealth of once-private information. Some ethicists are also concerned that TMS could be used to alter patients in ways that are undesired by the patient.

Cognitive liberty

Cognitive liberty refers to a suggested right to self-determination of individuals to control their own mental processes, cognition, and consciousness including by the use of various neurotechnologies and psychoactive substances. This perceived right is relevant for reformation and development of associated laws.
