
Computational neuroscience

From Wikipedia, the free encyclopedia

Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.

Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous. The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field.[6]

Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory; although mutual inspiration exists and there is sometimes no strict boundary between the fields, the level of model abstraction in computational neuroscience depends on research scope and the granularity at which biological entities are analyzed.

Models in theoretical neuroscience aim to capture the essential features of the biological system at multiple spatial-temporal scales, from membrane currents and chemical coupling, through network oscillations, columnar and topographic architecture, and nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.

History

The term 'computational neuroscience' was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the book Computational Neuroscience. The first of the annual open international meetings focused on Computational Neuroscience was organized by James M. Bower and John Miller in San Francisco, California in 1989. The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at the California Institute of Technology in 1985.

The early historical roots of the field can be traced to the work of people including Louis Lapicque, Hodgkin & Huxley, Hubel and Wiesel, and David Marr. Lapicque introduced the integrate-and-fire model of the neuron in a seminal article published in 1907, a model still popular in artificial neural network studies because of its simplicity.

About 40 years later, Hodgkin & Huxley developed the voltage clamp and created the first biophysical model of the action potential. Hubel & Wiesel discovered that neurons in the primary visual cortex, the first cortical area to process information coming from the retina, have oriented receptive fields and are organized in columns. David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within the hippocampus and neocortex interact, store, process, and transmit information. Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory.

Major topics

Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.

Single-neuron modeling

Even a single neuron has complex biophysical characteristics and can perform computations. Hodgkin and Huxley's original model employed only two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse the axolemma under certain conditions): the fast-acting sodium current and the delayed-rectifier potassium current. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there is a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.
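
The following is a minimal, illustrative sketch of the Hodgkin-Huxley equations described above, integrated with a simple forward-Euler loop in Python; the parameter values are the standard textbook ones, and the step-current stimulus is an arbitrary choice for the example, not a quoted experimental protocol.

```python
import numpy as np

# Minimal Hodgkin-Huxley point-neuron sketch (standard textbook parameters),
# integrated with forward Euler. Illustrative only, not a validated model.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387            # mV

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                               # ms
V, n, m, h = -65.0, 0.317, 0.053, 0.596          # approximate resting values
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0     # step current, uA/cm^2
    I_Na = g_Na * m**3 * h * (V - E_Na)          # fast sodium current
    I_K = g_K * n**4 * (V - E_K)                 # delayed-rectifier potassium
    I_L = g_L * (V - E_L)                        # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
print(f"final membrane potential: {V:.1f} mV")
```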

The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.

Some models also track biochemical pathways at very small scales such as spines or synaptic clefts.

There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.

Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics. However, detailed neuron descriptions are computationally expensive and this can handicap the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers that study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail. Hence there is a drive to produce simplified neuron models that can retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster running, simplified surrogate neuron models from computationally expensive, detailed neuron models.
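
As a contrast to the detailed model above, the following is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simplified surrogate commonly used in large network simulations; all parameter values are illustrative rather than fitted to any particular cell.

```python
# Leaky integrate-and-fire neuron: a typical simplified surrogate model.
# All parameters are illustrative, not fitted to any particular cell.
tau_m   = 20.0    # membrane time constant (ms)
v_rest  = -70.0   # resting potential (mV)
v_reset = -75.0   # reset potential after a spike (mV)
v_th    = -54.0   # spike threshold (mV)
r_m     = 10.0    # membrane resistance (MOhm)

dt, v, spikes = 0.1, v_rest, []
for step in range(10000):                 # 1 s of simulated time
    i_ext = 2.0                           # constant input current (nA)
    dv = (-(v - v_rest) + r_m * i_ext) / tau_m
    v += dt * dv
    if v >= v_th:                         # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in 1 s of simulated time")
```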

Development, axonal patterning, and guidance

Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral nervous systems? How do synapses form? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones, that modulate and influence the growth and development of functional connections between neurons.

Theoretical investigations into the formation and patterning of synaptic connection and morphology are still nascent. One hypothesis that has recently garnered some attention is the minimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.

Sensory processing

Early models of sensory processing within a theoretical framework are credited to Horace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of efficient coding, where neurons encode information in a way that minimizes the number of spikes. Experimental and computational work has since supported this hypothesis in one form or another. For the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.

Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck. A subsequent theory, V1 Saliency Hypothesis (V1SH), has been developed on exogenous attentional selection of a fraction of visual input for further processing, guided by a bottom-up saliency map in the primary visual cortex.

Current research in sensory processing is divided between biophysical modelling of different subsystems and more theoretical modelling of perception. Current models of perception suggest that the brain performs some form of Bayesian inference and integration of different sensory information in generating our perception of the physical world.
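
A standard toy instance of such Bayesian integration is precision-weighted fusion of two independent Gaussian cues (for example, a visual and an auditory estimate of the same quantity); the sketch below uses invented numbers purely for illustration.

```python
# Optimal (precision-weighted) fusion of two independent Gaussian cues,
# the standard toy example of Bayesian sensory integration.
# Numbers are invented for illustration only.
def fuse(mu_a, var_a, mu_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)   # weight = relative precision
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)             # fused variance is smaller
    return mu, var

# e.g. a visual estimate of an object's position vs. an auditory estimate
mu, var = fuse(mu_a=10.0, var_a=1.0, mu_b=14.0, var_b=4.0)
print(f"fused estimate: {mu:.2f} +/- {var ** 0.5:.2f}")
```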

Motor control

Many models of the way the brain controls movement have been developed. These include models of processing in the brain such as the cerebellum's role in error correction, skill learning in the motor cortex and the basal ganglia, and the control of the vestibulo-ocular reflex. They also include many normative models, such as those of the Bayesian or optimal-control flavor, which are built on the idea that the brain efficiently solves its problems.

Memory and synaptic plasticity

Earlier models of memory are primarily based on the postulates of Hebbian learning. Biologically relevant models such as the Hopfield network have been developed to address the properties of the associative (also known as "content-addressable") style of memory that occurs in biological systems. These attempts focus primarily on the formation of medium- and long-term memory, localized in the hippocampus. Models of working memory, relying on theories of network oscillations and persistent activity, have been built to capture some features of the prefrontal cortex in context-related memory. Additional models look at the close relationship between the basal ganglia and the prefrontal cortex and how that contributes to working memory.
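
As an illustration of the associative, content-addressable style of memory mentioned above, the following is a minimal Hopfield-network sketch: Hebbian storage of a few random binary patterns and asynchronous recall from a corrupted cue. Network size and the number of stored patterns are arbitrary choices for the example.

```python
import numpy as np

# Minimal Hopfield network: Hebbian storage of binary patterns and
# asynchronous recall from a corrupted cue. Sizes are illustrative.
rng = np.random.default_rng(0)
n, patterns = 64, np.random.default_rng(0).choice([-1, 1], size=(3, 64))

# Hebbian weight matrix (no self-connections)
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# Corrupt one stored pattern by flipping 10 random units
state = patterns[0].copy()
state[rng.choice(n, 10, replace=False)] *= -1

# Asynchronous updates until the state settles into an attractor
for _ in range(10):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", int(state @ patterns[0]), "of", n)
```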

One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstable synapses are easy to train but also prone to stochastic disruption. Stable synapses forget less easily, but they are also harder to consolidate. One recent computational hypothesis involves cascades of plasticity that allow synapses to function at multiple time scales. Stereochemically detailed models of the acetylcholine-receptor-based synapse using the Monte Carlo method, working at the time scale of microseconds, have been built. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in relation to external stimuli in the coming decades.

Behaviors of networks

Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike most artificial neural networks, sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as the visual cortex, are understood in some detail. It is also unknown what the computational functions of these specific connectivity patterns are, if any.

The interactions of neurons in a small network can often be reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well characterized theoretically. Some recent evidence suggests that the dynamics of arbitrary neuronal networks can be reduced to pairwise interactions. It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of two-photon microscopy and calcium imaging, we now have powerful experimental methods with which to test the new theories regarding neuronal networks.
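
The pairwise description mentioned above can be illustrated with an Ising-style energy function over binary activity patterns; in the sketch below the biases and couplings are random placeholders, whereas in practice they would be fitted (for example by maximum-entropy methods) to recorded firing statistics.

```python
import numpy as np

# Ising-style pairwise model of binary neural activity:
#   E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j
# Biases h and couplings J are random placeholders, not fitted to data.
rng = np.random.default_rng(1)
n = 10
h = rng.normal(0, 0.1, n)                 # single-neuron biases
J = rng.normal(0, 0.1, (n, n))
J = np.triu(J, 1)
J = J + J.T                               # symmetric couplings, zero diagonal

def energy(s):                            # s is a vector of +/-1 values
    return -h @ s - 0.5 * s @ J @ s

s = rng.choice([-1, 1], n)                # one example activity pattern
print(f"energy of the pattern: {energy(s):.3f}")
```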

In some cases the complex interactions between inhibitory and excitatory neurons can be simplified using mean-field theory, which gives rise to the population model of neural networks. While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural-functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.
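
A compact sketch of such a mean-field population model, in the spirit of the Wilson-Cowan equations, is given below: one excitatory and one inhibitory population coupled through sigmoidal rate functions. All coupling constants are illustrative placeholders.

```python
import math

# Wilson-Cowan-style mean-field model: two coupled populations
# (excitatory E, inhibitory I) with sigmoidal rate functions.
# All constants below are illustrative placeholders.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling strengths
theta_e, theta_i = 2.0, 3.5                      # firing thresholds
tau, dt = 10.0, 0.1                              # time constant (ms), step (ms)

E, I = 0.1, 0.1
for _ in range(5000):                            # integrate to steady state
    dE = (-E + sigmoid(w_ee * E - w_ei * I - theta_e)) / tau
    dI = (-I + sigmoid(w_ie * E - w_ii * I - theta_i)) / tau
    E += dt * dE
    I += dt * dI
print(f"steady-state rates: E={E:.3f}, I={I:.3f}")
```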

Visual attention, identification, and categorization

Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. In order to have a more concrete specification of the mechanism underlying visual attention and the binding of features, a number of computational models have been proposed aiming to explain psychophysical findings. In general, all models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it. An example theory that is being extensively tested behaviorally and physiologically is the V1 Saliency Hypothesis that a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously. Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and allows complete simulation and prediction of neuropsychological syndromes.

Cognition, discrimination, and learning

Computational modeling of higher cognitive functions has only recently begun. Experimental data comes primarily from single-unit recording in primates. The frontal lobe and parietal lobe function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.

The brain seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.

The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.

The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems in decision making and the manipulation of visual representations in decision making.

Consciousness

One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick, Giulio Tononi and Christof Koch made some attempts to formulate consistent frameworks for future work in neural correlates of consciousness (NCC), though much of the work in this field remains speculative. Specifically, Crick cautioned the field of neuroscience to not approach topics that are traditionally left to philosophy and religion.

Computational clinical neuroscience

Computational clinical neuroscience is a field that brings together experts in neuroscience, neurology, psychiatry, decision sciences and computational modeling to quantitatively define and investigate problems in neurological and psychiatric diseases, and to train scientists and clinicians that wish to apply these models to diagnosis and treatment.

Computational psychiatry

Computational psychiatry is an emerging field that brings together experts in machine learning, neuroscience, neurology, psychiatry, and psychology to provide an understanding of psychiatric disorders.

Technology

Neuromorphic computing

A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations (see: neuromorphic computing, physical neural network). One of the advantages of using a physical model computer such as this is that it takes the computational load off the processor (in the sense that the structural and some of the functional elements don't have to be programmed, since they are in hardware). In recent times, neuromorphic technology has been used to build supercomputers which are used in international neuroscience collaborations. Examples include the Human Brain Project SpiNNaker supercomputer and the BrainScaleS computer.

Variable-frequency drive

From Wikipedia, the free encyclopedia
 
Small variable-frequency drive
 
Chassis of above VFD (cover removed)

A variable-frequency drive (VFD) is a type of motor drive used in electro-mechanical drive systems to control AC motor speed and torque by varying motor input frequency and, depending on topology, to control associated voltage or current variation. VFDs may also be known as 'AFDs' (adjustable-frequency drives), 'ASDs' (adjustable-speed drives), 'VSDs' (variable-speed drives), 'AC drives', 'micro drives', 'inverter drives' or, simply, 'drives'.

VFDs are used in applications ranging from small appliances to large compressors. An increasing number of end users are showing greater interest in electric drive systems due to more stringent emission standards and demand for increased reliability and better availability. Systems using VFDs can be more efficient than those using throttling control of fluid flow, such as in systems with pumps and damper control for fans. However, the global market penetration for all applications of VFDs is relatively small.

Over the last four decades, power electronics technology has reduced VFD cost and size and has improved performance through advances in semiconductor switching devices, drive topologies, simulation and control techniques, and control hardware and software.

VFDs are made in a number of different low- and medium-voltage AC-AC and DC-AC topologies.

System description and operation

VFD system

A variable-frequency drive is a device used in a drive system consisting of the following three main sub-systems: AC motor, main drive controller assembly, and drive/operator interface.

AC motor

The AC electric motor used in a VFD system is usually a three-phase induction motor. Some types of single-phase motors or synchronous motors can be advantageous in some situations, but generally three-phase induction motors are preferred as the most economical. Motors that are designed for fixed-speed operation are often used. Elevated-voltage stresses imposed on induction motors that are supplied by VFDs require that such motors be designed for definite-purpose inverter-fed duty in accordance with such requirements as Part 31 of NEMA Standard MG-1.

Controller

The VFD controller is a solid-state power electronics conversion system consisting of three distinct sub-systems: a rectifier bridge converter, a direct current (DC) link, and an inverter. Voltage-source inverter (VSI) drives (see 'Generic topologies' sub-section below) are by far the most common type of drives. Most drives are AC-AC drives in that they convert AC line input to AC inverter output. However, in some applications such as common DC bus or solar applications, drives are configured as DC-AC drives. The most basic rectifier converter for the VSI drive is configured as a three-phase, six-pulse, full-wave diode bridge. In a VSI drive, the DC link consists of a capacitor which smooths out the converter's DC output ripple and provides a stiff input to the inverter. This filtered DC voltage is converted to quasi-sinusoidal AC voltage output using the inverter's active switching elements. VSI drives provide higher power factor and lower harmonic distortion than phase-controlled current-source inverter (CSI) and load-commutated inverter (LCI) drives (see 'Generic topologies' sub-section below). The drive controller can also be configured as a phase converter having single-phase converter input and three-phase inverter output.

Controller advances have exploited dramatic increases in the voltage and current ratings and switching frequency of solid-state power devices over the past six decades. Introduced in 1983, the insulated-gate bipolar transistor (IGBT) has in the past two decades come to dominate VFDs as an inverter switching device.

In variable-torque applications suited for Volts-per-Hertz (V/Hz) drive control, AC motor characteristics require that the voltage magnitude of the inverter's output to the motor be adjusted to match the required load torque in a linear V/Hz relationship. For example, for 460 V, 60 Hz motors, this linear V/Hz relationship is 460/60 = 7.67 V/Hz. While suitable in wide-ranging applications, V/Hz control is sub-optimal in high-performance applications involving low speed or demanding, dynamic speed regulation, positioning, and reversing load requirements. Some V/Hz control drives can also operate in quadratic V/Hz mode or can even be programmed to suit special multi-point V/Hz paths.
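
The linear V/Hz command described above can be expressed in a few lines; the sketch below uses the 460 V / 60 Hz example, and the small low-speed voltage boost is an assumption of the sketch rather than a quoted drive setting.

```python
# Linear Volts-per-Hertz command for the 460 V / 60 Hz example above:
# 460 / 60 = 7.67 V/Hz, clamped at rated voltage above base frequency.
# The low-speed voltage boost is an illustrative assumption.
def vhz_voltage(freq_hz, v_rated=460.0, f_base=60.0, v_boost=10.0):
    v_per_hz = v_rated / f_base            # 7.67 V/Hz for this example
    v = v_per_hz * freq_hz
    v = max(v, v_boost)                    # small boost near zero speed
    return min(v, v_rated)                 # never exceed rated voltage

for f in (5, 30, 60, 75):
    print(f"{f:>3} Hz -> {vhz_voltage(f):6.1f} V")
```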

The two other drive control platforms, vector control and direct torque control (DTC), adjust the motor voltage magnitude, angle from reference, and frequency so as to precisely control the motor's magnetic flux and mechanical torque.

Although space vector pulse-width modulation (SVPWM) is becoming increasingly popular, sinusoidal PWM (SPWM) is the most straightforward method used to vary drives' motor voltage (or current) and frequency. With SPWM control (see Fig. 1), quasi-sinusoidal, variable-pulse-width output is constructed from intersections of a saw-toothed carrier signal with a modulating sinusoidal signal which is variable in operating frequency as well as in voltage (or current). Operation of the motors above rated nameplate speed (base speed) is possible, but is limited to conditions that do not require more power than the nameplate rating of the motor. This is sometimes called "field weakening" and, for AC motors, means operating at less than rated V/Hz and above rated nameplate speed. Permanent magnet synchronous motors have quite limited field-weakening speed range due to the constant magnet flux linkage. Wound-rotor synchronous motors and induction motors have much wider speed range. For example, a 100 HP, 460 V, 60 Hz, 1775 RPM (4-pole) induction motor supplied with 460 V, 75 Hz (6.134 V/Hz), would be limited to 60/75 = 80% torque at 125% speed (2218.75 RPM) = 100% power. At higher speeds, the induction motor torque has to be limited further due to the lowering of the breakaway torque of the motor. Thus, rated power can be typically produced only up to 130-150% of the rated nameplate speed. Wound-rotor synchronous motors can be run at even higher speeds. In rolling mill drives, often 200-300% of the base speed is used. The mechanical strength of the rotor limits the maximum speed of the motor.
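
The field-weakening arithmetic in the 100 HP example above can be restated as a short calculation: above base frequency the voltage is clamped at its rated value, so V/Hz, flux and available torque fall roughly in proportion to frequency while rated power is maintained.

```python
# Field-weakening arithmetic for the example above: a 460 V, 60 Hz, 1775 RPM
# induction motor driven at 75 Hz with voltage clamped at 460 V.
v_rated, f_base, rpm_base = 460.0, 60.0, 1775.0

f = 75.0
v_per_hz = v_rated / f                    # 460 / 75 = 6.13 V/Hz (reduced flux)
speed = rpm_base * f / f_base             # 1775 * 75/60 = 2218.75 RPM
torque_frac = f_base / f                  # roughly 80% of rated torque
power_frac = torque_frac * (f / f_base)   # torque * speed = ~100% rated power

print(f"{v_per_hz:.2f} V/Hz, {speed:.2f} RPM, "
      f"{torque_frac:.0%} torque, {power_frac:.0%} power")
```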

Fig. 1: SPWM carrier-sine input & 2-level PWM output

An embedded microprocessor governs the overall operation of the VFD controller. Basic programming of the microprocessor is provided as user-inaccessible firmware. User programming of display, variable, and function block parameters is provided to control, protect, and monitor the VFD, motor, and driven equipment.

The basic drive controller can be configured to selectively include various optional power components and accessories.

Operator interface

The operator interface provides a means for an operator to start and stop the motor and adjust the operating speed. The VFD may also be controlled by a programmable logic controller through Modbus or another similar interface. Additional operator control functions might include reversing, and switching between manual speed adjustment and automatic control from an external process control signal. The operator interface often includes an alphanumeric display or indication lights and meters to provide information about the operation of the drive. An operator interface keypad and display unit is often provided on the front of the VFD controller as shown in the photograph above. The keypad display can often be cable-connected and mounted a short distance from the VFD controller. Most are also provided with input and output (I/O) terminals for connecting push buttons, switches, and other operator interface devices or control signals. A serial communications port is also often available to allow the VFD to be configured, adjusted, monitored, and controlled using a computer.

Speed control

There are two main ways to control the speed of a VFD: networked or hardwired. Networked control involves transmitting the intended speed over a communication protocol such as Modbus, Modbus/TCP, EtherNet/IP, or via a keypad using Display Serial Interface, while hardwired control involves a purely electrical means of communication. Typical means of hardwired communication are 4-20 mA, 0-10 V DC, or using the internal 24 V DC power supply with a potentiometer. Speed can also be controlled remotely or locally. Remote control instructs the VFD to ignore speed commands from the keypad, while local control instructs the VFD to ignore external control and abide only by the keypad.
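
For the hardwired case, the sketch below shows the usual scaling of a 4-20 mA analog reference to a frequency setpoint; the minimum and maximum output frequencies are illustrative values, not settings from any particular drive.

```python
# Mapping a hardwired 4-20 mA analog reference to a frequency setpoint.
# Minimum and maximum output frequencies are illustrative values.
def speed_from_4_20mA(current_mA, f_min=0.0, f_max=60.0):
    current_mA = min(max(current_mA, 4.0), 20.0)      # clamp to live-zero range
    span = (current_mA - 4.0) / (20.0 - 4.0)          # 0.0 .. 1.0
    return f_min + span * (f_max - f_min)

for mA in (4.0, 12.0, 20.0):
    print(f"{mA:>5.1f} mA -> {speed_from_4_20mA(mA):5.1f} Hz")
```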

Programming a VFD

Depending on the model, a VFD's operating parameters can be programmed via dedicated programming software, an internal keypad, an external keypad, or an SD card. VFDs will often block most programming changes while running. Typical parameters that need to be set include motor nameplate information, speed reference source, on/off control source, and braking control. It is also common for VFDs to provide debugging information such as fault codes and the states of the input signals.

Starting and software behavior

Most VFDs allow auto-starting to be enabled, which drives the output to a designated frequency after a power cycle, after a fault has been cleared, or after the emergency-stop signal has been restored (emergency stops are generally active-low logic). One popular way to control a VFD is to enable auto-start and place L1, L2, and L3 into a contactor; powering on the contactor thus turns on the drive and has it output at a designated speed. Depending on the sophistication of the drive, multiple auto-starting behaviors can be configured, e.g. the drive auto-starts on power-up but does not auto-start after clearing an emergency stop until a reset has been cycled.

Drive operation

Electric motor speed-torque chart

Referring to the accompanying chart, drive applications can be categorized as single-quadrant, two-quadrant, or four-quadrant; the chart's four quadrants are defined as follows (a small classification sketch follows the list):

  • Quadrant I - Driving or motoring, forward accelerating quadrant with positive speed and torque
  • Quadrant II - Generating or braking, forward braking-decelerating quadrant with positive speed and negative torque
  • Quadrant III - Driving or motoring, reverse accelerating quadrant with negative speed and torque
  • Quadrant IV - Generating or braking, reverse braking-decelerating quadrant with negative speed and positive torque.
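
As referenced above, the quadrant of an operating point follows directly from the signs of speed and torque, as in this small sketch (the example operating points are invented for illustration):

```python
# Classify a drive operating point into the four quadrants of the
# speed-torque plane described in the list above.
def quadrant(speed, torque):
    if speed >= 0 and torque >= 0:
        return "I (forward motoring)"
    if speed >= 0 and torque < 0:
        return "II (forward braking / generating)"
    if speed < 0 and torque < 0:
        return "III (reverse motoring)"
    return "IV (reverse braking / generating)"

print(quadrant(+1500, +20))   # e.g. a pump running forward under load
print(quadrant(-900, +15))    # e.g. a hoist braking while lowering a load
```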

Most applications involve single-quadrant loads operating in quadrant I, such as in variable-torque (e.g. centrifugal pumps or fans) and certain constant-torque (e.g. extruders) loads.

Certain applications involve two-quadrant loads operating in quadrants I and II, where the speed is positive but the torque changes polarity, as in the case of a fan decelerating faster than natural mechanical losses. Some sources define two-quadrant drives as loads operating in quadrants I and III, where the speed and torque have the same (positive or negative) polarity in both directions.

Certain high-performance applications involve four-quadrant loads (Quadrants I to IV) where the speed and torque can be in any direction such as in hoists, elevators, and hilly conveyors. Regeneration can occur only in the drive's DC link bus when inverter voltage is smaller in magnitude than the motor back-EMF and inverter voltage and back-EMF are the same polarity.

In starting a motor, a VFD initially applies a low frequency and voltage, thus avoiding high inrush current associated with direct-on-line starting. After the start of the VFD, the applied frequency and voltage are increased at a controlled rate or ramped up to accelerate the load. This starting method typically allows a motor to develop 150% of its rated torque while the VFD is drawing less than 50% of its rated current from the mains in the low-speed range. A VFD can be adjusted to produce a steady 150% starting torque from standstill right up to full speed. However, motor cooling deteriorates and can result in overheating as speed decreases such that prolonged low-speed operation with significant torque is not usually possible without separately motorized fan ventilation.

With a VFD, the stopping sequence is just the opposite of the starting sequence. The frequency and voltage applied to the motor are ramped down at a controlled rate. When the frequency approaches zero, the motor is shut off. A small amount of braking torque is available to help decelerate the load a little faster than it would stop if the motor were simply switched off and allowed to coast. Additional braking torque can be obtained by adding a braking circuit (resistor controlled by a transistor) to dissipate the braking energy. With a four-quadrant rectifier (active front-end), the VFD is able to brake the load by applying a reverse torque and injecting the energy back to the AC line.

Benefits

Energy savings

Many fixed-speed motor load applications that are supplied direct from AC line power can save energy when they are operated at variable speed by means of VFD. Such energy cost savings are especially pronounced in variable-torque centrifugal fan and pump applications, where the load's torque and power vary with the square and cube, respectively, of the speed. This change gives a large power reduction compared to fixed-speed operation for a relatively small reduction in speed. For example, at 63% speed a motor load consumes only 25% of its full-speed power. This reduction is in accordance with affinity laws that define the relationship between various centrifugal load variables.
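
The affinity-law arithmetic behind this claim is shown below; it reproduces the quoted figure that roughly 63% speed corresponds to about 25% of full-speed power for a centrifugal load.

```python
# Affinity laws for centrifugal fan/pump loads: torque ~ speed^2, power ~ speed^3.
# Reproduces the "63% speed -> about 25% power" figure quoted above.
def affinity(speed_fraction):
    torque = speed_fraction ** 2
    power = speed_fraction ** 3
    return torque, power

for s in (1.0, 0.8, 0.63, 0.5):
    t, p = affinity(s)
    print(f"{s:.0%} speed -> {t:.0%} torque, {p:.0%} power")
```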

In the United States, an estimated 60-65% of electrical energy is used to supply motors, 75% of which are variable-torque fan, pump, and compressor loads. Eighteen percent of the energy used in the 40 million motors in the U.S. could be saved by efficient energy improvement technologies such as VFDs.

Only about 3% of the total installed base of AC motors are provided with AC drives. However, it is estimated that drive technology is adopted in as many as 30-40% of all newly installed motors.

An energy consumption breakdown of the global population of AC motor installations is as shown in the following table:

Global population of motors, 2009

Category | Small | General purpose - medium-size | Large
Power | 10 W - 750 W | 0.75 kW - 375 kW | 375 kW - 10,000 kW
Phase, voltage | 1-ph., <240 V | 3-ph., 200 V to 1 kV | 3-ph., 1 kV to 20 kV
% total motor energy | 9% | 68% | 23%
Total stock | 2 billion | 230 million | 0.6 million

Control performance

AC drives are used to bring about process and quality improvements in industrial and commercial applications' acceleration, flow, monitoring, pressure, speed, temperature, tension, and torque.

Fixed-speed loads subject the motor to a high starting torque and to current surges that are up to eight times the full-load current. AC drives instead gradually ramp the motor up to operating speed to lessen mechanical and electrical stress, reducing maintenance and repair costs, and extending the life of the motor and the driven equipment.

Variable-speed drives can also run a motor in specialized patterns to further minimize mechanical and electrical stress. For example, an S-curve pattern can be applied to a conveyor application for smoother deceleration and acceleration control, which reduces the backlash that can occur when a conveyor is accelerating or decelerating.

Performance factors tending to favor the use of DC drives over AC drives include such requirements as continuous operation at low speed, four-quadrant operation with regeneration, frequent acceleration and deceleration routines, and need for the motor to be protected for a hazardous area. The following table compares AC and DC drives according to certain key parameters:

Drive type | DC | AC VFD | AC VFD | AC VFD | AC VFD
Control platform | Brush-type DC | V/Hz control | Vector control | Vector control | Vector control
Control criteria | Closed-loop | Open-loop | Open-loop | Closed-loop | Open-loop w. HFI^
Motor | DC | IM | IM | IM | Interior PM
Typical speed regulation (%) | 0.01 | 1 | 0.5 | 0.01 | 0.02
Typical speed range at constant torque (%) | 0-100 | 10-100 | 3-100 | 0-100 | 0-100
Min. speed at 100% torque (% of base) | Standstill | 8% | 2% | Standstill | Standstill (200%)
Multiple-motor operation recommended | No | Yes | No | No | No
Fault protection (fused only or inherent to drive) | Fused only | Inherent | Inherent | Inherent | Inherent
Maintenance | Brushes | Low | Low | Low | Low
Feedback device | Tachometer or encoder | N/A | N/A | Encoder | N/A

^ High-frequency injection

VFD types and ratings

Generic topologies

Topology of VSI drive
 
Topology of CSI drive
 
Six-step drive waveforms
 
Topology of direct matrix converter

AC drives can be classified according to the following generic topologies:

  • Voltage-source inverter (VSI) drive topologies (see image): In a VSI drive, the DC output of the diode-bridge converter stores energy in the capacitor bus to supply stiff voltage input to the inverter. The vast majority of drives are VSI type with PWM voltage output.
  • Current-source inverter (CSI) drive topologies (see image): In a CSI drive, the DC output of the SCR-bridge converter stores energy in a series-inductor connection to supply stiff current input to the inverter. CSI drives can be operated with either PWM or six-step waveform output.
  • Six-step inverter drive topologies (see image): Now largely obsolete, six-step drives can be either VSI or CSI type and are also referred to as variable-voltage inverter drives, pulse-amplitude modulation (PAM) drives, square-wave drives or D.C. chopper inverter drives. In a six-step drive, the DC output of the SCR-bridge converter is smoothed via a capacitor bus and series-reactor connection to supply, via a Darlington pair or IGBT inverter, quasi-sinusoidal, six-step voltage or current input to an induction motor.
  • Load commutated inverter (LCI) drive topologies: In an LCI drive (a special CSI case), the DC output of the SCR-bridge converter stores energy via a DC link inductor circuit to supply stiff quasi-sinusoidal six-step current output to a second SCR-bridge inverter and an over-excited synchronous machine. Low-cost SCR-thyristor-based LCI-fed synchronous motor drives are often used in high-power, low-dynamic-performance fan, pump and compressor applications rated up to 100 MW.
  • Cycloconverter or matrix converter (MC) topologies (see image): Cycloconverters and MCs are AC-AC converters that have no intermediate DC link for energy storage. A cycloconverter operates as a three-phase current source via three anti-parallel-connected SCR-bridges in six-pulse configuration, each cycloconverter phase acting selectively to convert fixed line frequency AC voltage to an alternating voltage at a variable load frequency. MC drives are IGBT-based.
  • Doubly fed slip recovery system topologies: A doubly fed slip recovery system feeds rectified slip power to a smoothing reactor to supply power to the AC supply network via an inverter, the speed of the motor being controlled by adjusting the DC current.

Control platforms

Most drives use one or more of the following control platforms: V/Hz control, vector control, or direct torque control (DTC).

Load torque and power characteristics

Variable-frequency drives are also categorized by the following load torque and power characteristics:

  • Variable torque, such as in centrifugal fan, pump, and blower applications
  • Constant torque, such as in conveyor and positive-displacement pump applications
  • Constant power, such as in machine tool and traction applications.

Available power ratings

VFDs are available with voltage and current ratings covering a wide range of single-phase and multi-phase AC motors. Low-voltage (LV) drives are designed to operate at output voltages equal to or less than 690 V. While motor-application LV drives are available in ratings of up to the order of 5 or 6 MW, economic considerations typically favor medium-voltage (MV) drives with much lower power ratings. Different MV drive topologies (see Table 2) are configured in accordance with the voltage/current-combination ratings used in different drive controllers' switching devices, such that any given voltage rating is greater than or equal to one of the following standard nominal motor voltage ratings: generally either 2.3/4.16 kV (60 Hz) or 3.3/6.6 kV (50 Hz), with one thyristor manufacturer rated for up to 12 kV switching. In some applications a step-up transformer is placed between a LV drive and a MV motor load. MV drives are typically rated for motor applications greater than between about 375 and 750 kW (503 and 1,006 hp). MV drives have historically required considerably more application design effort than LV drive applications. The power rating of MV drives can reach 100 MW (130,000 hp), with a range of different drive topologies being involved for different rating, performance, power quality, and reliability requirements.

Application considerations

AC line harmonics


While harmonics in the PWM output can easily be filtered by carrier-frequency-related filter inductance to supply near-sinusoidal currents to the motor load, the VFD's diode-bridge rectifier converts AC line voltage to DC voltage output by superimposing non-linear half-phase current pulses, thus creating harmonic current distortion, and hence voltage distortion, of the AC line input. When the VFD loads are relatively small in comparison to the large, stiff power system available from the electric power company, the effects of VFD harmonic distortion of the AC grid can often be within acceptable limits. Furthermore, in low-voltage networks, harmonics caused by single-phase equipment such as computers and TVs are partially cancelled by three-phase diode bridge harmonics because their 5th and 7th harmonics are in counterphase. However, when the proportion of VFD and other non-linear load compared to total load, or of non-linear load compared to the stiffness of the AC power supply, or both, is large enough, the load can have a negative impact on the AC power waveform available to other power company customers on the same grid.

When the power company's voltage becomes distorted due to harmonics, losses in other loads such as normal fixed-speed AC motors are increased. This condition may lead to overheating and shorter operating life. Also, substation transformers and compensation capacitors are affected negatively. In particular, capacitors can cause resonance conditions that can unacceptably magnify harmonic levels. In order to limit the voltage distortion, owners of VFD load may be required to install filtering equipment to reduce harmonic distortion below acceptable limits. Alternatively, the utility may adopt a solution by installing filtering equipment of its own at substations affected by the large amount of VFD equipment being used. In high-power installations, harmonic distortion can be reduced by supplying multi-pulse rectifier-bridge VFDs from transformers with multiple phase-shifted windings.

It is also possible to replace the standard diode-bridge rectifier with a bi-directional IGBT switching device bridge mirroring the standard inverter which uses IGBT switching device output to the motor. Such rectifiers are referred to by various designations including active infeed converter (AIC), active rectifier, IGBT supply unit (ISU), active front end (AFE), or four-quadrant operation. With PWM control and a suitable input reactor, an AFE's AC line current waveform can be nearly sinusoidal. AFE inherently regenerates energy in four-quadrant mode from the DC side to the AC grid. Thus, no braking resistor is needed, and the efficiency of the drive is improved if the drive is frequently required to brake the motor.

Two other harmonics mitigation techniques exploit use of passive or active filters connected to a common bus with at least one VFD branch load on the bus. Passive filters involve the design of one or more low-pass LC filter traps, each trap being tuned as required to a harmonic frequency (5th, 7th, 11th, 13th, . . . kq+/-1, where k=integer, q=pulse number of converter).
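
The characteristic harmonic orders follow directly from the kq±1 rule quoted above; the short sketch below lists them for 6-pulse and 12-pulse converters.

```python
# Characteristic harmonic orders of a q-pulse converter: h = k*q +/- 1.
# For a 6-pulse diode bridge this gives 5, 7, 11, 13, ...; a 12-pulse
# front end cancels the 5th/7th pair, leaving 11, 13, 23, 25, ...
def characteristic_harmonics(pulse_number, k_max=4):
    orders = []
    for k in range(1, k_max + 1):
        orders += [k * pulse_number - 1, k * pulse_number + 1]
    return orders

print("6-pulse: ", characteristic_harmonics(6))
print("12-pulse:", characteristic_harmonics(12))
```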

It is very common practice for power companies or their customers to impose harmonic distortion limits based on IEC or IEEE standards. For example, IEEE Standard 519 limits at the customer's connection point call for the maximum individual frequency voltage harmonic to be no more than 3% of the fundamental and call for the voltage total harmonic distortion (THD) to be no more than 5% for a general AC power supply system.

Switching frequency foldback

One drive uses a default switching frequency setting of 4 kHz. Reducing the drive's switching frequency (the carrier-frequency) reduces the heat generated by the IGBTs.

A carrier frequency of at least ten times the desired output frequency is used to establish the PWM switching intervals. A carrier frequency in the range of 2,000 to 16,000 Hz is common for LV [low voltage, under 600 Volts AC] VFDs. A higher carrier frequency produces a better sine wave approximation but incurs higher switching losses in the IGBT, decreasing the overall power conversion efficiency.

Noise smoothing

Some drives have a noise smoothing feature that can be turned on to introduce a random variation to the switching frequency. This distributes the acoustic noise over a range of frequencies to lower the peak noise intensity.

Long-lead effects

The carrier-frequency pulsed output voltage of a PWM VFD has rapid rise times, whose transmission-line effects must be considered. Since the transmission-line impedances of the cable and motor are different, pulses tend to reflect back from the motor terminals into the cable. The resulting reflections can produce overvoltages equal to twice the DC bus voltage, or up to 3.1 times the rated line voltage for long cable runs, putting high stress on the cable and motor windings and eventually causing insulation failure. Insulation standards for three-phase motors rated 230 V or less adequately protect against such long-lead overvoltages. On 460 V or 575 V systems and inverters with 3rd-generation 0.1-microsecond-rise-time IGBTs, the maximum recommended cable distance between VFD and motor is about 50 m (150 feet). For emerging SiC MOSFET-powered drives, significant overvoltages have been observed at cable lengths as short as 3 meters. Solutions to overvoltages caused by long lead lengths include minimizing cable length, lowering carrier frequency, installing dV/dt filters, using inverter-duty-rated motors (that are rated 600 V to withstand pulse trains with rise time less than or equal to 0.1 microsecond, of 1,600 V peak magnitude), and installing LCR low-pass sine wave filters. Selection of the optimum PWM carrier frequency for AC drives involves balancing noise, heat, motor insulation stress, common-mode voltage-induced motor bearing current damage, smooth motor operation, and other factors. Further harmonics attenuation can be obtained by using an LCR low-pass sine wave filter or dV/dt filter.

Motor bearing currents

Carrier frequencies above 5 kHz are likely to cause bearing damage unless protective measures are taken.

PWM drives are inherently associated with high-frequency common-mode voltages and currents which may cause trouble with motor bearings. When these high-frequency voltages find a path to earth through a bearing, transfer of metal or electrical discharge machining (EDM) sparking occurs between the bearing's ball and the bearing's race. Over time, EDM-based sparking causes erosion in the bearing race that can be seen as a fluting pattern. In large motors, the stray capacitance of the windings provides paths for high-frequency currents that pass through the motor shaft ends, leading to a circulating type of bearing current. Poor grounding of motor stators can lead to shaft-to-ground bearing currents. Small motors with poorly grounded driven equipment are susceptible to high-frequency bearing currents.

Prevention of high-frequency bearing current damage uses three approaches: good cabling and grounding practices, interruption of bearing currents, and filtering or damping of common-mode currents, for example through soft magnetic cores, the so-called inductive absorbers. Good cabling and grounding practices can include the use of shielded, symmetrical-geometry power cable to supply the motor, installation of shaft grounding brushes, and conductive bearing grease. Bearing currents can be interrupted by installation of insulated bearings and specially designed electrostatic-shielded induction motors. Filtering and damping of high-frequency bearing currents can be achieved by inserting soft magnetic cores over the three phases, giving a high-frequency impedance against the common-mode or motor bearing currents. Another approach is to use, instead of standard 2-level inverter drives, either 3-level inverter drives or matrix converters.

Since inverter-fed motor cables' high-frequency current spikes can interfere with other cabling in facilities, such inverter-fed motor cables should not only be of shielded, symmetrical-geometry design but should also be routed at least 50 cm away from signal cables.

Dynamic braking

Torque generated by the drive causes the induction motor to run at synchronous speed less the slip. If the load drives the motor faster than synchronous speed, the motor acts as a generator, converting mechanical power back to electrical power. This power is returned to the drive's DC link element (capacitor or reactor). A DC-link-connected electronic power switch or braking DC chopper controls dissipation of this power as heat in a set of resistors. Cooling fans may be used to prevent resistor overheating.

Dynamic braking wastes braking energy by transforming it to heat. By contrast, regenerative drives recover braking energy by injecting this energy into the AC line. The capital cost of regenerative drives is, however, relatively high.

Regenerative drives

Line regenerative variable frequency drives, showing capacitors (top cylinders) and inductors attached, which filter the regenerated power.
 
Simplified Drive Schematic for a Popular EHV

Regenerative AC drives have the capacity to recover the braking energy of a load moving faster than the designated motor speed (an overhauling load) and return it to the power system.

Cycloconverter, Scherbius, matrix, CSI, and LCI drives inherently allow return of energy from the load to the line, while voltage-source inverters require an additional converter to return energy to the supply.

Regeneration is useful in VFDs only where the value of the recovered energy is large compared to the extra cost of a regenerative system, and if the system requires frequent braking and starting. Regenerative VFDs are widely used where speed control of overhauling loads is required.

Some examples:

  • Conveyor belt drives for manufacturing, which stop every few minutes. While stopped, parts are assembled correctly; once that is done, the belt moves on.
  • A crane, where the hoist motor stops and reverses frequently, and braking is required to slow the load during lowering.
  • Plug-in and hybrid electric vehicles of all types (see image and Hybrid Synergy Drive).

Libertarianism (metaphysics)

From Wikipedia, the free encyclopedia
 
The task of the metaphysical libertarian is to reconcile free will with indeterminism
 

Libertarianism is one of the main philosophical positions related to the problems of free will and determinism which are part of the larger domain of metaphysics. In particular, libertarianism is an incompatibilist position which argues that free will is logically incompatible with a deterministic universe. Libertarianism states that since agents have free will, determinism must be false.

One of the first clear formulations of libertarianism is found in John Duns Scotus. In theological context, metaphysical libertarianism was notably defended by Jesuit authors like Luis de Molina and Francisco Suárez against rather compatibilist Thomist Bañecianism. Other important metaphysical libertarians in the early modern period were René Descartes, George Berkeley, Immanuel Kant and Thomas Reid.

Roderick Chisholm was a prominent defender of libertarianism in the 20th century and contemporary libertarians include Robert Kane, Peter van Inwagen and Robert Nozick.

Overview

The first recorded use of the term libertarianism was in 1789 by William Belsham in a discussion of free will and in opposition to necessitarian or determinist views.

Metaphysical libertarianism is one philosophical viewpoint falling under incompatibilism. Libertarianism holds to a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances.

Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, and consequently the world is not closed under physics. Such interactionist dualists believe that some non-physical mind, will, or soul overrides physical causality.

Explanations of libertarianism that do not involve dispensing with physicalism require physical indeterminism, such as probabilistic subatomic particle behavior – a theory unknown to many of the early writers on free will. Physical determinism, under the assumption of physicalism, implies there is only one possible future and is therefore not compatible with libertarian free will. Some libertarian explanations involve invoking panpsychism, the theory that a quality of mind is associated with all particles, and pervades the entire universe, in both animate and inanimate entities. Other approaches do not require free will to be a fundamental constituent of the universe; ordinary randomness is appealed to as supplying the "elbow room" believed to be necessary by libertarians.

Free volition is regarded as a particular kind of complex, high-level process with an element of indeterminism. An example of this kind of approach has been developed by Robert Kane, where he hypothesizes that,

In each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which has to be overcome by effort.

Although at the time quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, in his book Miracles: A preliminary study C. S. Lewis stated the logical possibility that if the physical world were proved indeterministic this would provide an entry point to describe an action of a non-physical entity on physical reality. Indeterministic physical models (particularly those involving quantum indeterminacy) introduce random occurrences at an atomic or subatomic level. These events might affect brain activity, and could seemingly allow incompatibilist free will if the apparent indeterminacy of some mental processes (for instance, subjective perceptions of control in conscious volition) maps to the underlying indeterminacy of the physical construct. This relationship, however, requires a causative role over probabilities that is questionable, and it is far from established that brain activity responsible for human action can be affected by such events. Secondarily, these incompatibilist models are dependent upon the relationship between action and conscious volition, as studied in the neuroscience of free will. It is evident that observation may disturb the outcome of the observation itself, limiting our ability to identify causality. Niels Bohr, one of the main architects of quantum theory, suggested, however, that no connection could be made between indeterminism of nature and freedom of will.

Agent-causal theories

In non-physical theories of free will, agents are assumed to have power to intervene in the physical world, a view known as agent causation. Proponents of agent causation include George Berkeley, Thomas Reid, and Roderick Chisholm.

Most events can be explained as the effects of prior events. When a tree falls, it does so because of the force of the wind, its own structural weakness, and so on. However, when a person performs a free act, agent causation theorists say that the action was not caused by any other events or states of affairs, but rather was caused by the agent. Agent causation is ontologically separate from event causation. The action was not uncaused, because the agent caused it. But the agent's causing it was not determined by the agent's character, desires, or past, since that would just be event causation. As Chisholm explains it, humans have "a prerogative which some would attribute only to God: each of us, when we act, is a prime mover unmoved. In doing what we do, we cause certain events to happen, and nothing – or no one – causes us to cause those events to happen."

This theory involves a difficulty which has long been associated with the idea of an unmoved mover. If a free action was not caused by any event, such as a change in the agent or an act of the will, then what is the difference between saying that an agent caused the event and simply saying that the event happened on its own? As William James put it, "If a 'free' act be a sheer novelty, that comes not from me, the previous me, but ex nihilo, and simply tacks itself on to me, how can I, the previous I, be responsible? How can I have any permanent character that will stand still long enough for praise or blame to be awarded?"

Agent causation advocates respond that agent causation is actually more intuitive than event causation. They point to David Hume's argument that when we see two events happen in succession, our belief that one event caused the other cannot be justified rationally (known as the problem of induction). If that is so, where does our belief in causality come from? According to Thomas Reid, "the conception of an efficient cause may very probably be derived from the experience we have had...of our own power to produce certain effects." Our everyday experiences of agent causation provide the basis for the idea of event causation.

Event-causal theories

Event-causal accounts of incompatibilist free will typically rely upon physicalist models of mind (like those of the compatibilist), yet they presuppose physical indeterminism, in which certain indeterministic events are said to be caused by the agent. A number of event-causal accounts of free will have been created, referenced here as deliberative indeterminism, centred accounts, and efforts of will theory. The first two accounts do not require free will to be a fundamental constituent of the universe. Ordinary randomness is appealed to as supplying the "elbow room" that libertarians believe necessary. A first common objection to event-causal accounts is that the indeterminism could be destructive and could therefore diminish control by the agent rather than provide it (related to the problem of origination). A second common objection to these models is that it is questionable whether such indeterminism could add any value to deliberation over that which is already present in a deterministic world.

Deliberative indeterminism asserts that the indeterminism is confined to an earlier stage in the decision process. This is intended to provide an indeterminate set of possibilities to choose from, while not risking the introduction of luck (random decision making). The selection process is deterministic, although it may be based on earlier preferences established by the same process. Deliberative indeterminism has been referenced by Daniel Dennett and John Martin Fischer. An obvious objection to such a view is that an agent cannot be assigned ownership over their decisions (or preferences used to make those decisions) to any greater degree than that of a compatibilist model.

Centred accounts propose that for any given decision between two possibilities, the strength of the reasons for each option will be weighed, yet there is still a probability that the candidate backed by the weaker reasons will be chosen. An obvious objection to such a view is that decisions are explicitly left up to chance, and that origination or responsibility cannot be assigned for any given decision.
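As a minimal sketch of the chance element such accounts describe, the following Python snippet models a two-option decision; the particular options and the assumption that reason-strength translates directly into selection probability are illustrative, not drawn from the source:

import random

# Hypothetical "centred" choice: each option's reason-strength fixes its
# probability of being selected, so the option backed by weaker reasons
# can still win the random draw.
reason_strength = {"keep the promise": 0.7, "break the promise": 0.3}
options = list(reason_strength)
weights = list(reason_strength.values())
decision = random.choices(options, weights=weights)[0]
print(decision)

The objection above is visible in the sketch: once the weights are fixed, which option is actually chosen is settled by nothing but the random draw.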

Efforts of will theory concerns the role of willpower in decision making. It suggests that the indeterminacy of agent volition processes could map to the indeterminacy of certain physical events, and that the outcomes of these events could therefore be considered to be caused by the agent. Models of volition have been constructed in which it is seen as a particular kind of complex, high-level process with an element of physical indeterminism. An example of this approach is that of Robert Kane, who hypothesizes that "in each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes – a hindrance or obstacle in the form of resistance within her will which must be overcome by effort." According to Kane, such "ultimate responsibility" is a required condition for free will. An important feature of such a theory is that the agent cannot be reduced to physical neuronal events; rather, mental processes are said to provide an account of the determination of the outcome that is as valid as that of the physical processes (see non-reductive physicalism).

Epicurus

Epicurus, an ancient Hellenistic philosopher, argued that as atoms moved through the void, there were occasions when they would "swerve" (clinamen) from their otherwise determined paths, thus initiating new causal chains. He argued that these swerves would allow us to be more responsible for our actions, something that would be impossible if every action were deterministically caused.

Epicurus did not say the swerve was directly involved in decisions. But following Aristotle, Epicurus thought human agents have the autonomous ability to transcend necessity and chance (both of which destroy responsibility), so that praise and blame are appropriate. Epicurus finds a tertium quid, beyond necessity and beyond chance. His tertium quid is agent autonomy, what is "up to us."

[S]ome things happen of necessity (ἀνάγκη), others by chance (τύχη), others through our own agency (παρ’ ἡμᾶς). [...]. [N]ecessity destroys responsibility and chance is inconstant; whereas our own actions are autonomous, and it is to them that praise and blame naturally attach.

The Epicurean philosopher Lucretius (1st century BC) saw the randomness as enabling free will, even if he could not explain exactly how, beyond the fact that random swerves would break the causal chain of determinism.

Again, if all motion is always one long chain, and new motion arises out of the old in order invariable, and if the first-beginnings do not make by swerving a beginning of motion such as to break the decrees of fate, that cause may not follow cause from infinity, whence comes this freedom (libera) in living creatures all over the earth, whence I say is this will (voluntas) wrested from the fates by which we proceed whither pleasure leads each, swerving also our motions not at fixed times and fixed places, but just where our mind has taken us? For undoubtedly it is his own will in each that begins these things, and from the will movements go rippling through the limbs.

However, the interpretation of these ancient philosophers is controversial. Tim O'Keefe has argued that Epicurus and Lucretius were not libertarians at all, but compatibilists.

Robert Nozick

Robert Nozick put forward an indeterministic theory of free will in Philosophical Explanations (1981).

When human beings become agents through reflexive self-awareness, they express their agency by having reasons for acting, to which they assign weights. Choosing the dimensions of one's identity is a special case, in which the assigning of weight to a dimension is partly self-constitutive. But all acting for reasons is constitutive of the self in a broader sense, namely by shaping one's character and personality in a manner analogous to the shaping that law undergoes through the precedent set by earlier court decisions. Just as a judge does not merely apply the law but to some degree makes it through judicial discretion, so too a person does not merely discover weights but assigns them; one not only weighs reasons but also weights them. This sets in train a process of building a framework for future decisions to which we are tentatively committed.

The lifelong process of self-definition in this broader sense is construed indeterministically by Nozick. The weighting is "up to us" in the sense that it is undetermined by antecedent causal factors, even though subsequent action is fully caused by the reasons one has accepted. He compares assigning weights in this indeterministic sense to "the currently orthodox interpretation of quantum mechanics", following von Neumann in understanding a quantum mechanical system as being in a superposition or probability mixture of states, which evolves continuously in accordance with the quantum mechanical equations of motion and changes discontinuously via a measurement or observation that "collapses the wave packet" from a superposition to a particular state. Analogously, a person before a decision has reasons without fixed weights: he is in a superposition of weights. The process of decision reduces the superposition to a particular state that causes action.
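For readers unfamiliar with the formalism the analogy borrows, the standard textbook picture (a sketch of ordinary quantum mechanics, not anything Nozick himself sets out in symbols) is

$$|\psi\rangle \;=\; \sum_i c_i\,|i\rangle \;\xrightarrow{\ \text{measurement}\ }\; |k\rangle \quad \text{with probability } |c_k|^2.$$

On the analogy, the basis states correspond to candidate weightings of one's reasons, and the decision plays the role of the measurement that leaves exactly one weighting in force.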

Robert Kane

One particularly influential contemporary theory of libertarian free will is that of Robert Kane. Kane argues that "(1) the existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely, and that (2) determinism is not compatible with alternative possibilities (it precludes the power to do otherwise)". The crux of Kane's position, however, is grounded not in a defense of alternative possibilities (AP) but in the notion of what Kane refers to as ultimate responsibility (UR). Thus, AP is a necessary but insufficient criterion for free will: it is necessary that there be (metaphysically) real alternatives for our actions, but that is not enough, since our actions could be random without being in our control. The control is found in "ultimate responsibility".

Ultimate responsibility entails that agents must be the ultimate creators (or originators) and sustainers of their own ends and purposes. There must be more than one way for a person's life to turn out (AP). More importantly, whichever way it turns out must be based in the person's willing actions. Kane defines it as follows:

(UR) An agent is ultimately responsible for some (event or state) E's occurring only if (R) the agent is personally responsible for E's occurring in a sense which entails that something the agent voluntarily (or willingly) did or omitted either was, or causally contributed to, E's occurrence and made a difference to whether or not E occurred; and (U) for every X and Y (where X and Y represent occurrences of events and/or states) if the agent is personally responsible for X and if Y is an arche (sufficient condition, cause or motive) for X, then the agent must also be personally responsible for Y.

In short, "an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring."
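Read formally, the quoted definition gives two necessary conditions. In rough logical notation (a paraphrase for clarity, not Kane's own symbolism), writing R(a, X) for "agent a is personally responsible for X" and Arche(Y, X) for "Y is a sufficient condition, cause or motive for X":

$$\mathrm{UR}(a, E) \;\rightarrow\; \mathrm{R}(a, E) \;\wedge\; \forall X\,\forall Y\,\big[\mathrm{R}(a, X) \wedge \mathrm{Arche}(Y, X) \rightarrow \mathrm{R}(a, Y)\big].$$

The second conjunct is what generates the regress that only the "self-forming actions" discussed below can stop: responsibility for an action propagates backwards to whatever sufficiently caused or motivated it.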

What allows for ultimacy of creation in Kane's picture are what he refers to as "self-forming actions" or SFAs, those moments of indecision during which people experience conflicting wills. These SFAs are the undetermined, regress-stopping voluntary actions or refrainings in the life histories of agents that are required for UR. UR does not require that every act done of our own free will be undetermined, and thus that for every act or choice we could have done otherwise; it requires only that certain of our choices and actions be undetermined (and thus that we could have done otherwise), namely the SFAs. These form our character or nature; they inform our future choices, reasons and motivations in action. If a person has had the opportunity to make a character-forming decision (SFA), they are responsible for the actions that result from their character.

Critique

Randolph Clarke objects that Kane's depiction of free will is not truly libertarian but rather a form of compatibilism. The objection asserts that although the outcome of an SFA is not determined, one's history up to the event is; so the fact that an SFA will occur is also determined. The outcome of the SFA is based on chance, and from that point on one's life is determined. This kind of freedom, says Clarke, is no different from the kind of freedom argued for by compatibilists, who assert that even though our actions are determined, they are free because they are in accordance with our own wills, much like the outcome of an SFA.

Kane responds that the difference between causal indeterminism and compatibilism is "ultimate control—the originative control exercised by agents when it is 'up to them' which of a set of possible choices or actions will now occur, and up to no one and nothing else over which the agents themselves do not also have control". UR assures that the sufficient conditions for one's actions do not lie before one's own birth.

Galen Strawson holds that there is a fundamental sense in which free will is impossible, whether determinism is true or not. He argues for this position with what he calls his "basic argument", which aims to show that no one is ever ultimately morally responsible for their actions, and hence that no one has free will in the sense that usually concerns us.

In his book defending compatibilism, Freedom Evolves, Daniel Dennett spends a chapter criticising Kane's theory. Kane believes freedom is based on certain rare and exceptional events, which he calls self-forming actions or SFAs. Dennett notes that there is no guarantee such an event will occur in an individual's life. If it does not, the individual does not in fact have free will at all, according to Kane; yet such a person will seem no different from anyone else. Dennett finds an essentially undetectable notion of free will to be incredible.
