Thursday, September 21, 2023

Significant figures

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Significant_figures

Significant figures, also referred to as significant digits or sig figs, are specific digits within a number written in positional notation that carry both reliability and necessity in conveying a particular quantity. When presenting the outcome of a measurement (such as length, pressure, volume, or mass), if the number of digits exceeds what the measurement instrument can resolve, only the number of digits within the resolution's capability are dependable and therefore considered significant.

For instance, if a length measurement yields 114.8 mm, using a ruler with the smallest interval between marks at 1 mm, the first three digits (1, 1, and 4, representing 114 mm) are certain and constitute significant figures. Even digits that are uncertain yet reliable are also included in the significant figures. In this scenario, the last digit (8, contributing 0.8 mm) is likewise considered significant despite its uncertainty.

Another example involves a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume falls between 2.93 L and 3.03 L. Even if certain digits are not completely known, they are still significant if they are reliable, as they indicate the actual volume within an acceptable range of uncertainty. In this case, the actual volume might be 2.94 L or possibly 3.02 L, so all three digits are considered significant.

The following types of digits are not considered significant:

  • Leading zeros. For instance, 013 kg has two significant figures—1 and 3—while the leading zero is insignificant since it does not impact the mass indication; 013 kg is equivalent to 13 kg, rendering the zero unnecessary. Similarly, in the case of 0.056 m, there are two insignificant leading zeros since 0.056 m is the same as 56 mm, thus the leading zeros do not contribute to the length indication.
  • Trailing zeros when they serve as placeholders. In the measurement 1500 m, the trailing zeros are insignificant if they simply stand for the tens and ones places (assuming the measurement resolution is 100 m). In this instance, 1500 m indicates the length is approximately 1500 m rather than an exact value of 1500 m.
  • Spurious digits that arise from calculations resulting in a higher precision than the original data or a measurement reported with greater precision than the instrument's resolution.

Among a number's significant figures, the most significant is the digit with the greatest exponent value (the leftmost significant figure), while the least significant is the digit with the lowest exponent value (the rightmost significant figure). For example, in the number "123," the "1" is the most significant figure, representing hundreds (10²), while the "3" is the least significant figure, representing ones (10⁰).

To avoid conveying a misleading level of precision, numbers are often rounded. For instance, it would create false precision to present a measurement as 12.34525 kg when the measuring instrument only provides accuracy to the nearest gram (0.001 kg). In this case, the significant figures are the first five digits (1, 2, 3, 4, and 5) from the leftmost digit, and the number should be rounded to these significant figures, resulting in 12.345 kg as the accurate value. The rounding error (in this example, 0.00025 kg = 0.25 g) approximates the numerical resolution or precision. Numbers can also be rounded for simplicity, not necessarily to indicate measurement precision, such as for the sake of expediency in news broadcasts.

Significance arithmetic encompasses a set of approximate rules for preserving significance through calculations. More advanced scientific rules are known as the propagation of uncertainty.

Radix 10 (base-10, decimal numbers) is assumed in the following. (See unit in the last place for extending these concepts to other bases.)

Identifying significant figures

Rules to identify significant figures in a number

Note that identifying the significant figures in a number requires knowing which digits are reliable (e.g., by knowing the measurement or reporting resolution with which the number is obtained or processed) since only reliable digits can be significant; e.g., 3 and 4 in 0.00234 g are not significant if the smallest measurable weight is 0.001 g.

  • Non-zero digits within the given measurement or reporting resolution are significant.
    • 91 has two significant figures (9 and 1) if they are measurement-allowed digits.
    • 123.45 has five significant digits (1, 2, 3, 4 and 5) if they are within the measurement resolution. If the resolution is 0.1, then the last digit 5 is not significant.
  • Zeros between two significant non-zero digits are significant (significant trapped zeros).
    • 101.12003 consists of eight significant figures if the resolution is to 0.00001.
    • 125.340006 has seven significant figures if the resolution is to 0.0001: 1, 2, 5, 3, 4, 0, and 0.
  • Zeros to the left of the first non-zero digit (leading zeros) are not significant.
    • If a length measurement gives 0.052 km, then 0.052 km = 52 m, so only the 5 and 2 are significant; the leading zeros appear or disappear depending on which unit is used, so they are not needed to indicate the scale of the measurement.
    • 0.00034 has 2 significant figures (3 and 4) if the resolution is 0.00001.
  • Zeros to the right of the last non-zero digit (trailing zeros) in a number with the decimal point are significant if they are within the measurement or reporting resolution.
    • 1.200 has four significant figures (1, 2, 0, and 0) if they are allowed by the measurement resolution.
    • 0.0980 has three significant digits (9, 8, and the last zero) if they are within the measurement resolution.
    • 120.000 consists of six significant figures (1, 2, and the four subsequent zeroes) if, as before, they are within the measurement resolution.
  • Trailing zeros in an integer may or may not be significant, depending on the measurement or reporting resolution.
    • 45,600 has 3, 4 or 5 significant figures depending on how the last zeros are used. For example, if the length of a road is reported as 45600 m without information about the reporting or measurement resolution, then it is not clear whether the road length is precisely measured as 45600 m or is only a rough estimate. If it is a rough estimate, then only the first three non-zero digits are significant, since the trailing zeros are neither reliable nor necessary; 45600 m can be expressed as 45.6 km or as 4.56 × 10⁴ m in scientific notation, and neither expression requires the trailing zeros.
  • An exact number has an infinite number of significant figures.
    • If the number of apples in a bag is 4 (exact number), then this number is 4.0000... (with infinite trailing zeros to the right of the decimal point). As a result, 4 does not impact the number of significant figures or digits in the result of calculations with it.
  • A mathematical or physical constant has significant figures to its known digits.
    • π is a specific real number with several equivalent definitions. All of the digits in its exact decimal expansion 3.14159265358979323... are significant. Although many properties of these digits are known — for example, they do not repeat, because π is irrational — not all of the digits are known. As of 19 August 2021, more than 62 trillion digits have been calculated. A 62 trillion-digit approximation has 62 trillion significant digits. In practical applications, far fewer digits are used. The everyday approximation 3.14 has three significant decimal digits and 7 correct binary digits. The approximation 22/7 has the same three correct decimal digits but has 10 correct binary digits. Most calculators and computer programs can handle the 16-digit expansion 3.141592653589793, which is sufficient for interplanetary navigation calculations.
    • The Planck constant is h = 6.62607015 × 10⁻³⁴ J⋅s. Since the 2019 revision of the SI, it is defined as this exact value, so all of its digits are significant.
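
As a rough summary of the rules above, a significant-figure counter can be sketched in Python. This is a simplified sketch (the function name is illustrative): it cannot know the measurement resolution, so it treats trailing zeros in an integer as placeholders, i.e. not significant.

```python
def count_sig_figs(s: str) -> int:
    """Count significant figures in a decimal numeral given as a string.

    Trailing zeros in an integer with no decimal point are treated as
    placeholders, i.e. not significant (the ambiguous case discussed above).
    """
    s = s.lstrip('+-')
    has_point = '.' in s
    digits = s.replace('.', '').lstrip('0')  # leading zeros are never significant
    if not has_point:
        digits = digits.rstrip('0')          # ambiguous trailing zeros dropped
    return len(digits)

print(count_sig_figs("0.052"))    # 2
print(count_sig_figs("1.200"))    # 4
print(count_sig_figs("45600"))    # 3
```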

Ways to denote significant figures in an integer with trailing zeros

The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundreds due to rounding or uncertainty. Many conventions exist to address this issue. However, these are not universally used and would only be effective if the reader is familiar with the convention:

  • An overline, sometimes also called an overbar, or less accurately, a vinculum, may be placed over the last significant figure; any trailing zeros following this are insignificant. For example, 130̄0 (with the overline over the tens-place zero) has three significant figures (and hence indicates that the number is precise to the nearest ten).
  • Less often, using a closely related convention, the last significant figure of a number may be underlined; for example, "1300" with the 3 underlined has two significant figures.
  • A decimal point may be placed after the number; for example "1300." indicates specifically that trailing zeros are meant to be significant.

As the conventions above are not in general use, the following more widely recognized options are available for indicating the significance of a number with trailing zeros:

  • Eliminate ambiguous or non-significant zeros by changing the unit prefix in a number with a unit of measurement. For example, the precision of measurement specified as 1300 g is ambiguous, while if stated as 1.30 kg it is not. Likewise 0.0123 L can be rewritten as 12.3 mL.
  • Eliminate ambiguous or non-significant zeros by using scientific notation: For example, 1300 with three significant figures becomes 1.30 × 10³. Likewise 0.0123 can be rewritten as 1.23 × 10⁻². The part of the representation that contains the significant figures (1.30 or 1.23) is known as the significand or mantissa. The digits in the base and exponent (10³ or 10⁻²) are considered exact numbers, so for these digits, significant figures are irrelevant.
  • Explicitly state the number of significant figures (the abbreviation s.f. is sometimes used): For example "20 000 to 2 s.f." or "20 000 (2 sf)".
  • State the expected variability (precision) explicitly with a plus–minus sign, as in 20 000 ± 1%. This also allows specifying a range of precision in-between powers of ten.
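
For instance, scientific notation can be produced directly with Python's exponent format; one digit appears before the point, so p significant figures correspond to p − 1 digits after it (a quick illustration, not one of the conventions above):

```python
p = 3  # desired significant figures
print(f"{1300:.{p - 1}e}")    # 1.30e+03 -- the three significant figures are explicit
print(f"{0.0123:.{p - 1}e}")  # 1.23e-02
```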

Rounding to significant figures

Rounding to significant figures is a more general-purpose technique than rounding to n digits, since it handles numbers of different scales in a uniform way. For example, the population of a city might only be known to the nearest thousand and be stated as 52,000, while the population of a country might only be known to the nearest million and be stated as 52,000,000. The former might be in error by hundreds, and the latter might be in error by hundreds of thousands, but both have two significant figures (5 and 2). This reflects the fact that the significance of the error is the same in both cases, relative to the size of the quantity being measured.

To round a number to n significant figures:

  1. If the n + 1 digit is greater than 5 or is 5 followed by other non-zero digits, add 1 to the n digit. For example, if we want to round 1.2459 to 3 significant figures, then this step results in 1.25.
  2. If the n + 1 digit is 5 not followed by other digits or followed by only zeros, then rounding requires a tie-breaking rule. For example, to round 1.25 to 2 significant figures:
    • Round half away from zero (also known as "5/4") rounds up to 1.3. This is the default rounding method implied in many disciplines if the required rounding method is not specified.
    • Round half to even, which rounds to the nearest even number. With this method, 1.25 is rounded down to 1.2. If this method applies to 1.35, then it is rounded up to 1.4. This is the method preferred by many scientific disciplines, because, for example, it avoids skewing the average value of a long list of values upwards.
  3. For an integer in rounding, replace the digits after the n digit with zeros. For example, if 1254 is rounded to 2 significant figures, then the 5 and 4 are replaced with 0 so that it becomes 1300. For a number with a decimal point, remove the digits after the n digit. For example, if 14.895 is rounded to 3 significant figures, then the digits after 8 are removed so that it becomes 14.9.
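
The steps above can be sketched as a small Python function (the name round_sig is illustrative). Note that Python's built-in round() uses round half to even, the tie-breaking rule preferred by many scientific disciplines as noted in step 2:

```python
from math import floor, log10

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    # Decimal place of the n-th significant figure, counted from the decimal point.
    place = -int(floor(log10(abs(x)))) + (n - 1)
    return round(x, place)

print(round_sig(1.2459, 3))  # 1.25
print(round_sig(1254, 2))    # 1300
print(round_sig(1.25, 2))    # 1.2  (round half to even)
```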

In financial calculations, a number is often rounded to a given number of places. For example, to two places after the decimal separator for many world currencies. This is done because greater precision is immaterial, and usually it is not possible to settle a debt of less than the smallest currency unit.

In UK personal tax returns, income is rounded down to the nearest pound, whilst tax paid is calculated to the nearest penny.

As an illustration, the decimal quantity 12.345 can be expressed with various numbers of significant figures or decimal places. If insufficient precision is available then the number is rounded in some manner to fit the available precision. The following table shows the results for various total precision under the two rounding methods (N/A stands for Not Applicable).

Precision   Rounded to significant figures   Rounded to decimal places
6           12.3450                          12.345000
5           12.345                           12.34500
4           12.34 or 12.35                   12.3450
3           12.3                             12.345
2           12                               12.34 or 12.35
1           10                               12.3
0           N/A                              12

Another example for 0.012345. (Remember that the leading zeros are not significant.)

Precision   Rounded to significant figures   Rounded to decimal places
7           0.01234500                       0.0123450
6           0.0123450                        0.012345
5           0.012345                         0.01234 or 0.01235
4           0.01234 or 0.01235               0.0123
3           0.0123                           0.012
2           0.012                            0.01
1           0.01                             0.0
0           N/A                              0

The representation of a non-zero number x to a precision of p significant digits has a numerical value that is given by the formula

    round(x / 10ⁿ) · 10ⁿ, where n = ⌊log₁₀(|x|)⌋ + 1 − p,

which may need to be written with a specific marking as detailed above to specify the number of significant trailing zeros.

Writing uncertainty and implied uncertainty

Significant figures in writing uncertainty

It is recommended that a measurement result include the measurement uncertainty, written as x_best ± σ_x, where x_best and σ_x are the best estimate and the uncertainty in the measurement respectively. x_best can be the average of measured values, and σ_x can be the standard deviation or a multiple of the measurement deviation. The rules for writing it are:

  • σx has only one or two significant figures as more precise uncertainty has no meaning.
    • 1.79 ± 0.06 (correct), 1.79 ± 0.96 (correct), 1.79 ± 1.96 (incorrect).
  • The digit positions of the last significant figures in x_best and σ_x are the same; otherwise the consistency is lost. For example, in 1.79 ± 0.067 (incorrect), it does not make sense for the uncertainty to be more precise than the best estimate. 1.79 ± 0.9 (incorrect) also does not make sense, since the rounding guideline for addition and subtraction below implies that the edges of the true-value range are 2.7 and 0.9, which are less precise than the best estimate.
    • 1.79 ± 0.06 (correct), 1.79 ± 0.96 (correct), 1.79 ± 0.067 (incorrect), 1.79 ± 0.9 (incorrect).
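
A minimal Python sketch of these two rules (the function name is illustrative): round σ_x to one significant figure, then round x_best to the same decimal place. Carry cases such as σ = 0.096, which rounds up a place, are not handled.

```python
from math import floor, log10

def format_measurement(x_best: float, sigma: float) -> str:
    """Format 'x_best ± sigma' with sigma at one significant figure and
    x_best rounded to the same decimal place."""
    place = -int(floor(log10(sigma)))  # decimal place of sigma's leading digit
    return f"{round(x_best, place)} ± {round(sigma, place)}"

print(format_measurement(1.79, 0.067))  # 1.79 ± 0.07
print(format_measurement(1.79, 0.9))    # 1.8 ± 0.9
```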

Implied uncertainty

In chemistry (and likely in other scientific branches), uncertainty may be implied by the last significant figure when it is not explicitly expressed. The implied uncertainty is ± half of the minimum scale at the position of the last significant figure. For example, if the volume of water in a bottle is reported as 3.78 L without mention of uncertainty, then a measurement uncertainty of ± 0.005 L may be implied. If a weight is measured as 2.97 ± 0.07 kg, so that the actual weight is somewhere between 2.90 and 3.04 kg, and it is desired to report it with a single number, then 3.0 kg is the best number to report, since its implied uncertainty of ± 0.05 kg gives a weight range of 2.95 to 3.05 kg, which is close to the measurement range. If the measurement is 2.97 ± 0.09 kg, then 3.0 kg is still the best, because if 3 kg were reported its implied uncertainty of ± 0.5 kg would give a range of 2.5 to 3.5 kg, which is too wide compared with the measurement range.

If there is a need to write the implied uncertainty of a number, then it can be written as x ± σ_x, explicitly stated as the implied uncertainty (to prevent readers from mistaking it for the measurement uncertainty), where x and σ_x are the number with an extra zero digit (to follow the rules for writing uncertainty above) and its implied uncertainty respectively. For example, 6 kg with the implied uncertainty ± 0.5 kg can be stated as 6.0 ± 0.5 kg.
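
The implied-uncertainty rule can be sketched in Python (the function name is illustrative; trailing zeros in an integer are treated as placeholders, as above):

```python
def implied_uncertainty(s: str) -> float:
    """Half of one unit in the place of the last significant digit."""
    if '.' in s:
        decimals = len(s.split('.')[1])
        return 0.5 * 10 ** (-decimals)
    # Integer: trailing zeros assumed to be placeholders.
    placeholder_zeros = len(s) - len(s.rstrip('0'))
    return 0.5 * 10 ** placeholder_zeros

print(implied_uncertainty("3.78"))  # 0.005
print(implied_uncertainty("3"))     # 0.5
```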

Arithmetic

As there are rules to determine the significant figures in directly measured quantities, there are also guidelines (not rules) to determine the significant figures in quantities calculated from these measured quantities.

Significant figures in measured quantities are most important in the determination of significant figures in calculated quantities with them. A mathematical or physical constant (e.g., π in the formula for the area of a circle with radius r as πr2) has no effect on the determination of the significant figures in the result of a calculation with it if its known digits are equal to or more than the significant figures in the measured quantities used in the calculation. An exact number such as ½ in the formula for the kinetic energy of a mass m with velocity v as ½mv2 has no bearing on the significant figures in the calculated kinetic energy since its number of significant figures is infinite (0.500000...).

The guidelines described below are intended to avoid a calculation result more precise than the measured quantities, but they do not guarantee that the implied uncertainty of the result is close enough to the measured uncertainties. This problem can be seen in unit conversion. If the guidelines give an implied uncertainty too far from the measured one, then it may be necessary to choose significant digits that give a comparable uncertainty.

Multiplication and division

For quantities created from measured quantities via multiplication and division, the calculated result should have as many significant figures as the least number of significant figures among the measured quantities used in the calculation. For example,

  • 1.234 × 2 = 2.468 ≈ 2
  • 1.234 × 2.0 = 2.468 ≈ 2.5
  • 0.01234 × 2 = 0.02468 ≈ 0.02

with one, two, and one significant figures respectively. (2 here is assumed not to be an exact number.) For the first example, the first multiplication factor has four significant figures and the second has one significant figure. The factor with the fewest significant figures is the second one with only one, so the final calculated result should also have one significant figure.
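
This rule can be applied mechanically with Python's exponent formatting (a sketch; the function name and the explicit significant-figure arguments are illustrative, since a float alone does not carry that information):

```python
def multiply_sf(a: float, sf_a: int, b: float, sf_b: int) -> float:
    """Multiply two measured values, keeping the fewer of the two
    significant-figure counts in the result."""
    n = min(sf_a, sf_b)
    return float(f"{a * b:.{n - 1}e}")

print(multiply_sf(1.234, 4, 2.0, 2))  # 2.5
print(multiply_sf(1.234, 4, 2, 1))    # 2.0
```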

Exception

For unit conversion, the implied uncertainty of the result can be unsatisfactorily higher than that in the previous unit if this rounding guideline is followed. For example, 8 inches has an implied uncertainty of ± 0.5 inch = ± 1.27 cm. If it is converted to centimeters and the rounding guideline for multiplication and division is followed, then 20.32 cm ≈ 20 cm with an implied uncertainty of ± 5 cm. If this implied uncertainty is considered too much of an overestimate, then more appropriate significant digits in the unit conversion result may be 20.32 cm ≈ 20. cm with an implied uncertainty of ± 0.5 cm.

Another exception to the above rounding guideline is multiplying a number by an integer, such as 1.234 × 9. If the guideline is followed, the result is rounded as 1.234 × 9.000... = 11.106 ≈ 11.11. However, this multiplication is essentially adding 1.234 to itself 9 times, i.e. 1.234 + 1.234 + … + 1.234, so the rounding guideline for addition and subtraction described below is the more appropriate approach. The final answer is then 1.234 + 1.234 + … + 1.234 = 11.106 (one significant digit increase).

Addition and subtraction

For quantities created from measured quantities via addition and subtraction, the last significant figure position (e.g., hundreds, tens, ones, tenths, hundredths, and so forth) in the calculated result should be the same as the leftmost or largest digit position among the last significant figures of the measured quantities in the calculation. For example,

  • 1.234 + 2 = 3.234 ≈ 3
  • 1.234 + 2.0 = 3.234 ≈ 3.2
  • 0.01234 + 2 = 2.01234 ≈ 2

with the last significant figures in the ones place, tenths place, and ones place respectively. (2 here is assumed not an exact number.) For the first example, the first term has its last significant figure in the thousandths place and the second term has its last significant figure in the ones place. The leftmost or largest digit position among the last significant figures of these terms is the ones place, so the calculated result should also have its last significant figure in the ones place.

The rules for calculating significant figures for multiplication and division are not the same as the rules for addition and subtraction. For multiplication and division, only the total number of significant figures in each of the factors in the calculation matters; the digit position of the last significant figure in each factor is irrelevant. For addition and subtraction, only the digit position of the last significant figure in each of the terms in the calculation matters; the total number of significant figures in each term is irrelevant.[citation needed] However, greater accuracy will often be obtained if some non-significant digits are maintained in intermediate results which are used in subsequent calculations.
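
The addition/subtraction rule can be sketched in Python; since a float does not record where its last significant digit is, each term is paired with that decimal place explicitly (an illustrative sketch):

```python
def add_sf(terms: list[tuple[float, int]]) -> float:
    """Add terms given as (value, decimal places of the last significant digit),
    rounding the sum to the fewest decimal places among the terms."""
    total = sum(value for value, _ in terms)
    return round(total, min(places for _, places in terms))

print(add_sf([(1.234, 3), (2.0, 1)]))  # 3.2
print(add_sf([(1.234, 3), (2, 0)]))    # 3.0
```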

Logarithm and antilogarithm

The base-10 logarithm of a normalized number (i.e., a × 10ᵇ with 1 ≤ a < 10 and b an integer) is rounded such that its decimal part (called the mantissa) has as many significant figures as there are in the normalized number.

  • log₁₀(3.000 × 10⁴) = log₁₀(10⁴) + log₁₀(3.000) = 4.000000... (exact number so infinite significant digits) + 0.4771212547... = 4.4771212547... ≈ 4.4771.

When taking the antilogarithm of a normalized number, the result is rounded to have as many significant figures as there are in the decimal part of the number whose antilogarithm is taken.

  • 10^4.4771 = 29998.5318119... ≈ 30000 = 3.000 × 10⁴.
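
A quick Python check of the logarithm example above; the characteristic (from the exact power of ten) is kept as-is and only the mantissa is rounded:

```python
import math

x = 3.000e4                    # four significant figures
lg = math.log10(x)             # 4.4771212547...
characteristic = int(lg)       # exact part, from the power of ten
mantissa = lg % 1              # decimal part, carries the significance
print(characteristic + round(mantissa, 4))  # 4.4771
```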

Transcendental functions

If a transcendental function f (e.g., the exponential function, the logarithm, and the trigonometric functions) is differentiable at its domain element x, then the number of significant figures in f(x) is approximately related to the number of significant figures in x by the formula

    significant figures of f(x) ≈ significant figures of x − log₁₀(|ξ(x)|),

where ξ(x) = x·f′(x)/f(x) is the condition number. See the significance arithmetic article for its derivation.
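
A numerical illustration (the function name is illustrative, and the derivative is approximated by a central difference): for f = exp the condition number x·f′(x)/f(x) is exactly x, so at x = 20 about log₁₀(20) ≈ 1.3 significant digits are lost.

```python
import math

def digits_lost(x: float, f, h: float = 1e-6) -> float:
    """log10 of the condition number |x * f'(x) / f(x)|, with f'
    approximated by a central difference."""
    fp = (f(x + h) - f(x - h)) / (2 * h)
    return math.log10(abs(x * fp / f(x)))

print(digits_lost(20.0, math.exp))  # about 1.301
```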

Round only on the final calculation result

When performing multiple-stage calculations, do not round intermediate results; keep as many digits as is practical (at least one more digit than the rounding rule allows per stage) until the end of all the calculations, to avoid cumulative rounding errors, while tracking or recording the significant figures in each intermediate result. Then round the final result, for example, to the fewest number of significant figures (for multiplication or division) or to the leftmost last significant digit position (for addition or subtraction) among the inputs in the final calculation.

  • (2.3494 + 1.345) × 1.2 = 3.6944 × 1.2 = 4.43328 ≈ 4.4.
  • (2.3494 × 1.345) + 1.2 = 3.159943 + 1.2 = 4.359943 ≈ 4.4.
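
A small Python experiment (with illustrative numbers) shows how rounding every intermediate result drifts, while rounding only once at the end does not:

```python
# Add 0.18 one hundred times, rounding each partial sum to one decimal place.
rounded_each_step = 0.0
for _ in range(100):
    rounded_each_step = round(rounded_each_step + 0.18, 1)

# Same calculation, rounding only the final result.
rounded_once = round(0.18 * 100, 1)

print(rounded_each_step)  # 20.0 -- every partial sum ending in .x8 rounded up
print(rounded_once)       # 18.0
```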

Estimating an extra digit

When using a ruler, initially use the smallest mark as the first estimated digit. For example, if a ruler's smallest mark is 0.1 cm, and 4.5 cm is read, then the value is 4.5 cm (± 0.1 cm), i.e. somewhere between 4.4 cm and 4.6 cm as judged by the smallest mark interval. However, in practice a measurement can usually be estimated by eye to closer than the interval between the ruler's smallest marks, e.g. in the above case it might be estimated as between 4.51 cm and 4.53 cm.

It is also possible that the overall length of a ruler may not be accurate to the degree of the smallest mark, and the marks may be imperfectly spaced within each unit. However, assuming a normal, good-quality ruler, it should be possible to estimate tenths between the nearest two marks to achieve an extra decimal place of accuracy. Failing to do this adds the error in reading the ruler to any error in the calibration of the ruler.

Estimation in statistics

When estimating the proportion of individuals carrying some particular characteristic in a population, from a random sample of that population, the number of significant figures should not exceed the maximum precision allowed by that sample size.
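
As a sketch with made-up numbers: the standard error of a sample proportion is about sqrt(p(1 − p)/n), so a sample of 100 cannot support more than roughly two significant figures in the estimate.

```python
import math

n, p = 100, 0.47                 # illustrative sample size and observed proportion
se = math.sqrt(p * (1 - p) / n)  # standard error, roughly 0.05
print(f"{p:.2f} ± {se:.2f}")     # reporting 0.4700 here would be false precision
```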

Relationship to accuracy and precision in measurement

Traditionally, in various technical fields, "accuracy" refers to the closeness of a given measurement to its true value; "precision" refers to the stability of that measurement when repeated many times. Thus, it is possible to be "precisely wrong". Hoping to reflect the way in which the term "accuracy" is actually used in the scientific community, there is a recent standard, ISO 5725, which keeps the same definition of precision but defines the term "trueness" as the closeness of a given measurement to its true value and uses the term "accuracy" as the combination of trueness and precision. (See the accuracy and precision article for a full discussion.) In either case, the number of significant figures roughly corresponds to precision, not to accuracy or the newer concept of trueness.

In computing

Computer representations of floating-point numbers use a form of rounding to significant figures (while usually not keeping track of how many), in general with binary numbers. The number of correct significant figures is closely related to the notion of relative error (which has the advantage of being a more accurate measure of precision, and is independent of the radix, also known as the base, of the number system used).

Wednesday, September 20, 2023

Blue Brain Project

From Wikipedia, the free encyclopedia

The Blue Brain Project is a Swiss brain research initiative that aims to create a digital reconstruction of the mouse brain. The project was founded in May 2005 by the Brain and Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Its mission is to use biologically-detailed digital reconstructions and simulations of the mammalian brain to identify the fundamental principles of brain structure and function.

The project is headed by the founding director Henry Markram—who also launched the European Human Brain Project—and is co-directed by Felix Schürmann, Adriana Salvatore and Sean Hill. Using a Blue Gene supercomputer running Michael Hines's NEURON, the simulation involves a biologically realistic model of neurons and an empirically reconstructed model connectome.

There are a number of collaborations, including the Cajal Blue Brain, which is coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories.

Goal

The initial goal of the project, which was completed in December 2006, was the creation of a simulated rat neocortical column, considered by some researchers to be the smallest functional unit of the neocortex, the part of the brain thought to be responsible for higher functions such as conscious thought. In humans, each column is about 2 mm (0.079 in) in length, has a diameter of 0.5 mm (0.020 in) and contains about 60,000 neurons. Rat neocortical columns are very similar in structure but contain only 10,000 neurons and 10⁸ synapses. Between 1995 and 2005, Markram mapped the types of neurons and their connections in such a column.

Progress

By 2005, the first cellular model was completed. The first artificial cellular neocortical column of 10,000 cells was built by 2008. By July 2011, a cellular mesocircuit of 100 neocortical columns with a million cells in total was built. A cellular rat brain had been planned for 2014 with 100 mesocircuits totalling a hundred million cells. A cellular human brain equivalent to 1,000 rat brains with a total of a hundred billion cells has been predicted to be possible by 2023.

In November 2007, the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.

In 2015, scientists at École Polytechnique Fédérale de Lausanne (EPFL) developed a quantitative model of the previously unknown relationship between the glial cell astrocytes and neurons. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neuron-glial cells is being added to Blue Brain Project models to improve functionality of the system.

In 2017, Blue Brain Project discovered that neural cliques connected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studying neural networks cannot detect that many dimensions. The Blue Brain Project was able to model these networks using algebraic topology.

In 2018, Blue Brain Project released its first digital 3D brain cell atlas which, according to ScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.

In 2019, Idan Segev, one of the computational neuroscientists working on the Blue Brain Project, gave a talk titled: "Brain in the computer: what did I learn from simulating the brain." In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtual EEG experiments would begin soon. He also mentioned that the model had become too heavy on the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as an artificial neural network (see citation for details).

In 2022, scientists at the Blue Brain Project used algebraic topology to create an algorithm, Topological Neuronal Synthesis, that generates a large number of unique cells using only a few examples, synthesizing millions of unique neuronal morphologies. This allows them to replicate both healthy and diseased states of the brain. In a paper Kenari et al. were able to digitally synthesize dendritic morphologies from the mouse brain using this algorithm. They mapped entire brain regions from just a few reference cells. Since it is open source, this will enable the modelling of brain diseases and eventually, the algorithm could lead to digital twins of brains.

Software

The Blue Brain Project has developed a number of software tools to reconstruct and simulate the mouse brain.

Blue Brain Nexus

Blue Brain Nexus is a data integration platform which uses a knowledge graph to enable users to search, deposit, and organise data. It builds on the FAIR data principles to provide flexible data management solutions beyond neuroscience studies. It is open-source software, available to everyone on GitHub.

BluePyOpt

BluePyOpt is a tool used to build electrical models of single neurons. For this, it uses evolutionary algorithms to constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt have been reported by Rosanna Migliore and Stefano Masori. It is open-source software, available to everyone on GitHub.

CoreNEURON

CoreNEURON is a supplemental tool to NEURON which allows large-scale simulation by improving memory efficiency and computational speed. It is open-source software, available to everyone on GitHub.

NeuroMorphoVis

NeuroMorphoVis is a visualisation tool for the morphologies of neurons. It is open-source software, available to everyone on GitHub.

SONATA

SONATA is a joint effort between the Blue Brain Project and the Allen Institute for Brain Science to develop a standard data format, enabling work across multiple platforms with greater memory and computational efficiency. It is open-source software, available to everyone on GitHub.

Funding

The project is funded primarily by the Swiss government and the Future and Emerging Technologies (FET) Flagship grant from the European Commission, and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of the Blue Gene supercomputer concept.

Criticisms

The management of the Blue Brain Project has undeniably missed the excessively ambitious targets it set itself in 2013.

Voices were raised as early as September 2014 to criticize the management of the project's key promoter, Professor Henry Markram, as well as the carelessness of the Brussels authorities who funded the project.

Prof. Markram was eventually removed from the leadership in 2016.

Related projects

Cajal Blue Brain

Cajal Blue Brain used the Magerit supercomputer (CeSViMa)

The Cajal Blue Brain Project is coordinated by the Technical University of Madrid led by Javier de Felipe and uses the facilities of the Supercomputing and Visualization Center of Madrid and its supercomputer Magerit. The Cajal Institute also participates in this collaboration. The main lines of research currently being pursued at Cajal Blue Brain include neurological experimentation and computer simulations. Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.

Documentary

Noah Hutton created the documentary film In Silico over a 10-year period. The film was released in April 2021. It covers the "shifting goals and landmarks" of the Blue Brain Project as well as its surrounding drama: "In the end, this isn’t about science. It’s about the universals of power, greed, ego, and fame."

Scientific notation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Scientific_notation

Scientific notation is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or standard form in the United Kingdom. This base ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators it is usually known as "SCI" display mode.

Decimal notation    Scientific notation
2                   2×10^0
300                 3×10^2
4321.768            4.321768×10^3
−53000              −5.3×10^4
6720000000          6.72×10^9
0.2                 2×10^−1
987                 9.87×10^2
0.00000000751       7.51×10^−9

In scientific notation, nonzero numbers are written in the form

m × 10^n

or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa. The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10.

Decimal floating point is a computer arithmetic system closely related to scientific notation.

History

Normalized notation

Any given real number can be written in the form m×10^n in many ways: for example, 350 can be written as 3.5×10^2 or 35×10^1 or 350×10^0.

In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^−1). The 10 and exponent are often omitted when the exponent is 0. However, if there is a series of numbers that need to be compared (or perhaps added or subtracted), it is often convenient to write them all with the same value of the exponent.
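The normalization rule can be sketched in a few lines of Python (a minimal illustration; the helper name `normalize` is hypothetical, not from the article):

```python
import math

def normalize(x: float) -> tuple[float, int]:
    """Return (m, n) such that x == m * 10**n and 1 <= |m| < 10."""
    if x == 0:
        raise ValueError("zero has no normalized form")
    n = math.floor(math.log10(abs(x)))  # order of magnitude of x
    m = x / 10**n                       # scale the significand into [1, 10)
    return m, n

print(normalize(350))   # (3.5, 2)   i.e. 3.5×10^2
print(normalize(0.5))   # (5.0, -1)  i.e. 5×10^-1
```

Note that floating-point rounding can make `m` differ from the exact decimal significand in the last few digits; for exact decimal work, a string- or `decimal`-based approach would be used instead.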

Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation, although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation for instance) and to bases other than 10 (for example, 3.15×2^20).

Engineering notation

Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10^−9 m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25×10^−8 m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
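Restricting the exponent to a multiple of 3 amounts to rounding it down to the nearest such multiple. A rough Python sketch (the helper name `to_engineering` is hypothetical):

```python
import math

def to_engineering(x: float) -> str:
    """Format x with an exponent that is a multiple of 3 (1 <= |m| < 1000)."""
    if x == 0:
        return "0E0"
    n = math.floor(math.log10(abs(x)))
    n -= n % 3               # round the exponent down to a multiple of 3
    m = x / 10**n            # significand now falls in [1, 1000)
    return f"{m:g}E{n}"

print(to_engineering(12.5e-9))   # 12.5E-9  ("twelve-point-five nano")
print(to_engineering(300))       # 300E0
```

In Python, `-8 % 3` is 1, so the exponent −8 becomes −9, which is what makes this idiom work for negative exponents too.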

Significant figures

A significant figure is a digit in a number that adds to its precision. This includes all nonzero digits, zeroes between significant digits, and zeroes indicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1230400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0, giving seven significant figures.

When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1230400 would become 1.2304×10^6 if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040×10^6 or 1.230400×10^6. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
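Python's exponent format makes this concrete: asking for k significant figures means k − 1 digits after the decimal point (a minimal illustration; the helper name `sci` is hypothetical):

```python
def sci(x: float, sig: int) -> str:
    """Format x in normalized scientific notation with `sig` significant figures."""
    return f"{x:.{sig - 1}E}"

print(sci(1230400, 5))  # 1.2304E+06    (five significant figures)
print(sci(1230400, 7))  # 1.230400E+06  (seven significant figures)
```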

Estimated final digits

It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).

Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as 1.67262192369(51)×10^−27 kg, which is shorthand for (1.67262192369±0.00000000051)×10^−27 kg. However, it is still unclear whether the error (5.1×10^−37 in this case) is the maximum possible error or the standard deviation.
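The concise parenthesized form can be expanded mechanically: the digits in parentheses are the uncertainty in the last digits of the significand. A small sketch (the helper name `expand_concise` is hypothetical):

```python
def expand_concise(value: str, unc: str) -> tuple[float, float]:
    """Expand e.g. value='1.67262192369', unc='51' into (value, uncertainty)."""
    decimals = len(value.split(".")[1])    # digits after the decimal point
    return float(value), int(unc) * 10**-decimals

v, u = expand_concise("1.67262192369", "51")
print(v, u)   # the significand and its uncertainty, 5.1e-10
```

Multiplying both by 10^−27 recovers the proton-mass value and uncertainty quoted above.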

E notation

A Texas Instruments TI-84 Plus calculator display showing the Avogadro constant in E notation

Most calculators and many computer programs present very large and very small results in scientific notation, typically invoked by a key labelled EXP (for exponent), EEX (for enter exponent), EE, EX, E, or ×10^x depending on vendor and model. Because superscripted exponents like 10^7 cannot always be conveniently displayed, the letter E (or e) is often used to represent "times ten raised to the power of" (which would be written as "× 10^n") and is followed by the value of the exponent; in other words, for any real number m and integer n, the usage of "mEn" would indicate a value of m × 10^n. In this usage the character e is not related to the mathematical constant e or the exponential function e^x (a confusion that is unlikely if scientific notation is represented by a capital E). Although the E stands for exponent, the notation is usually referred to as (scientific) E notation rather than (scientific) exponential notation. The use of E notation facilitates data entry and readability in textual communication since it minimizes keystrokes, avoids reduced font sizes and provides a simpler and more concise display, but it is not encouraged in some publications.
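Most programming languages parse E notation directly; in Python, for instance:

```python
avogadro = float("6.022E23")   # "times ten raised to the power of 23"
print(avogadro)                # 6.022e+23  (case of the E is irrelevant)
print(float("1.6e-35"))        # 1.6e-35
```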

Examples and other notations

  • Since its first version released for the IBM 704 in 1956, the Fortran language has used E notation for floating point numbers. It was not part of the preliminary specification as of 1954.
  • The E notation was already used by the developers of SHARE Operating System (SOS) for the IBM 709 in 1958.
  • In most popular programming languages, 6.022E23 (or 6.022e23) is equivalent to 6.022×10^23, and 1.6×10^−35 would be written 1.6E-35 (e.g. Ada, Analytica, C/C++, Fortran, MATLAB, Scilab, Perl, Java, Python, Lua, JavaScript, and others).
  • After the introduction of the first pocket calculators supporting scientific notation in 1972 (HP-35, SR-10) the term decapower was sometimes used in the emerging user communities for the power-of-ten multiplier in order to better distinguish it from "normal" exponents. Likewise, the letter "D" was used in typewritten numbers. This notation was proposed by Jim Davidson and published in the January 1976 issue of Richard J. Nelson's Hewlett-Packard newsletter 65 Notes for HP-65 users, and it was adopted and carried over into the Texas Instruments community by Richard C. Vanderburgh, the editor of the 52-Notes newsletter for SR-52 users in November 1976.
  • The displays of LED pocket calculators did not display an "E" or "e". Instead, one or more digits were left blank between the mantissa and exponent (e.g. 6.022 23, such as in the Hewlett-Packard HP-25), or a pair of smaller and slightly raised digits reserved for the exponent was used (e.g. 6.022 23, such as in the Commodore PR100).
  • Fortran (at least since FORTRAN IV as of 1961) also uses "D" to signify double precision numbers in scientific notation.
  • Similar, a "D" was used by Sharp pocket computers PC-1280, PC-1470U, PC-1475, PC-1480U, PC-1490U, PC-1490UII, PC-E500, PC-E500S, PC-E550, PC-E650 and PC-U6000 to indicate 20-digit double-precision numbers in scientific notation in BASIC between 1987 and 1995.
  • Some newer FORTRAN compilers like DEC FORTRAN 77 (f77), Intel Fortran, Compaq/Digital Visual Fortran or GNU Fortran (gfortran) support "Q" to signify quadruple precision numbers in scientific notation.
  • MATLAB supports both letters, "E" and "D", to indicate numbers in scientific notation.
  • The ALGOL 60 (1960) programming language uses a subscript-ten "₁₀" character instead of the letter E, for example: 6.022₁₀23.
  • The use of the "10" in the various Algol standards provided a challenge on some computer systems that did not provide such a "10" character. As a consequence Stanford University Algol-W required the use of a single quote, e.g. 6.022'+23, and some Soviet Algol variants allowed the use of the Cyrillic character "ю" character, e.g. 6.022ю+23.
  • Subsequently, the ALGOL 68 programming language provided the choice of 4 characters: E, e, \, or ₁₀. By examples: 6.022E23, 6.022e23, 6.022\23 or 6.022₁₀23.
  • Decimal Exponent Symbol is part of the Unicode Standard, e.g. 6.022⏨23. It is included as U+23E8 DECIMAL EXPONENT SYMBOL to accommodate usage in the programming languages Algol 60 and Algol 68.
  • In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature where the exponent would be circled, e.g. 6.022 × 10^3 would be written as "6.022③".
  • The TI-83 series and TI-84 Plus series of calculators use a stylized E character to display decimal exponent and the 10 character to denote an equivalent ×10^ operator.
  • The Simula programming language requires the use of & (or && for long), for example: 6.022&23 (or 6.022&&23).
  • The Wolfram Language (utilized in Mathematica) allows a shorthand notation of 6.022*^23. (Instead, E denotes the mathematical constant e).

Use of spaces

In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal-width space or a thin space) before and after "×" or in front of "E" is sometimes omitted, though it is less common to omit it before the alphabetical character.

Further examples of scientific notation

  • An electron's mass is about 0.000000000000000000000000000000910938356 kg. In scientific notation, this is written 9.10938356×10^−31 kg (in SI units).
  • The Earth's mass is about 5972400000000000000000000 kg. In scientific notation, this is written 5.9724×10^24 kg.
  • The Earth's circumference is approximately 40000000 m. In scientific notation, this is 4×10^7 m. In engineering notation, this is written 40×10^6 m. In SI writing style, this may be written 40 Mm (40 megametres).
  • An inch is defined as exactly 25.4 mm. Quoting a value of 25.400 mm shows that the value is correct to the nearest micrometre. An approximated value with only two significant digits would be 2.5×10^1 mm instead. As there is no limit to the number of significant digits, the length of an inch could, if required, be written as (say) 2.54000000000×10^1 mm instead.
  • Hyperinflation is a problem caused when too much money is printed relative to the commodities available, causing the inflation rate to rise by 50% or more in a single month; currencies then tend to lose their value over time. Some countries have had an inflation rate of 1 million percent or more in a single month, which usually results in the abandonment of the country's currency shortly afterwards. In November 2008, the monthly inflation rate of the Zimbabwean dollar reached 79.6 billion percent; the approximated value with three significant figures would be 7.96×10^10 percent.

Converting numbers

Converting a number in these cases means to either convert the number into scientific notation form, convert it back into decimal form or to change the exponent part of the equation. None of these alter the actual number, only how it's expressed.

Decimal to scientific

First, move the decimal separator point sufficient places, n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10^n; to the right, × 10^−n. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 10^6 appended, resulting in 1.2304×10^6. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left and yield −4.0321×10^−3 as a result.
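The shift-and-count procedure is exactly what a normalized conversion does; a minimal Python sketch (the helper name `to_scientific` is hypothetical):

```python
import math

def to_scientific(x: float) -> str:
    """Convert a nonzero number to a normalized scientific-notation string."""
    n = math.floor(math.log10(abs(x)))  # how many places the point moves
    m = x / 10**n                       # the normalized significand
    return f"{m:g}×10^{n}"

print(to_scientific(1230400))     # 1.2304×10^6
print(to_scientific(-0.0040321))  # -4.0321×10^-3
```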

Scientific to decimal

To convert a number from scientific notation to decimal notation, first remove the × 10^n on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304×10^6 would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321×10^−3 would have its decimal separator moved 3 digits to the left and be −0.0040321.
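This conversion is what happens automatically when a language parses a scientific-notation literal, as a quick check in Python shows:

```python
x = 1.2304e6            # scientific-notation literal
print(x)                # 1230400.0
print(-4.0321e-3)       # -0.0040321
print(float("1.2304E6") == 1230400)   # True
```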

Exponential

Conversion between different scientific notation representations of the same number is achieved by performing the opposite operations of multiplication or division by a power of ten on the significand and addition or subtraction on the exponent part. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below.

1.234×10^3 = 12.34×10^2 = 123.4×10^1 = 1234

Basic operations

Given two numbers in scientific notation,

x₀ = m₀ × 10^(n₀)

and

x₁ = m₁ × 10^(n₁)

Multiplication and division are performed using the rules for operation with exponentiation:

x₀ x₁ = m₀ m₁ × 10^(n₀ + n₁)

and

x₀ / x₁ = (m₀ / m₁) × 10^(n₀ − n₁)

Some examples are:

5.67×10^−5 × 2.34×10^2 ≈ 13.3×10^−3 = 1.33×10^−2

and

2.34×10^2 / 5.67×10^−5 ≈ 0.413×10^7 = 4.13×10^6

Addition and subtraction require the numbers to be represented using the same exponential part, so that the significands can be simply added or subtracted:

x₀ = m₀ × 10^(n₀) and x₁ = m₁ × 10^(n₁), with n₀ = n₁

Next, add or subtract the significands:

x₀ ± x₁ = (m₀ ± m₁) × 10^(n₀)

An example:

2.34×10^−5 + 5.67×10^−6 = 2.34×10^−5 + 0.567×10^−5 ≈ 2.91×10^−5
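Keeping the significand and exponent as a (m, n) pair, these rules can be sketched in Python (the helper names `multiply` and `add` are hypothetical):

```python
import math

def multiply(a, b):
    """(m0, n0) * (m1, n1) -> normalized (m, n) with 1 <= |m| < 10."""
    m, n = a[0] * b[0], a[1] + b[1]
    shift = math.floor(math.log10(abs(m)))   # renormalize the significand
    return m / 10**shift, n + shift

def add(a, b):
    """Rewrite b with a's exponent, then add the significands."""
    m = a[0] + b[0] * 10**(b[1] - a[1])
    return m, a[1]

print(multiply((5.67, -5), (2.34, 2)))   # approximately (1.32678, -2)
print(add((2.34, -5), (5.67, -6)))       # approximately (2.907, -5)
```

The `add` result is left unnormalized, mirroring the text above: addition can leave a significand outside [1, 10), which would then be renormalized separately.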

Other bases

While base ten is normally used for scientific notation, powers of other bases can be used too, base 2 being the next most commonly used one.

For example, in base-2 scientific notation, the number 1001b in binary (= 9d) is written as 1.001b × 2d^11b or 1.001b × 10b^11b using binary numbers (or shorter 1.001 × 10^11 if binary context is obvious). In E notation, this is written as 1.001bE11b (or shorter: 1.001E11) with the letter E now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter B instead of E, a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968, as in 1.001bB11b (or shorter: 1.001B11). For comparison, the same number in decimal representation: 1.125 × 2^3 (using decimal representation), or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating-point numbers, where the exponent is displayed as a decimal number even in binary mode, so the above becomes 1.001b × 10b^3d or shorter 1.001B3.

This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and the usage of IEC binary prefixes (e.g. 1B10 for 1×2^10 (kibi), 1B20 for 1×2^20 (mebi), 1B30 for 1×2^30 (gibi), 1B40 for 1×2^40 (tebi)).

Similar to B (or b), the letters H (or h) and O (or o, or C) are sometimes also used to indicate times 16 or 8 to the power, as in 1.25 = 1.40h × 10h^0h = 1.40H0 = 1.40h0, or 98000 = 2.7732o × 10o^5o = 2.7732o5 = 2.7732C5.

Another similar convention to denote base-2 exponents is using a letter P (or p, for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal. This notation can be produced by implementations of the printf family of functions following the C99 specification and (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the %a or %A conversion specifiers. Starting with C++11, C++ I/O functions could parse and print the P notation as well. Meanwhile, the notation has been fully adopted by the language standard since C++17. Apple's Swift supports it as well. It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2^42.
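Python exposes this same P notation through `float.hex()` and `float.fromhex()`:

```python
x = 1.25
print(x.hex())                        # 0x1.4000000000000p+0
y = float.fromhex("0x1.3DEp42")       # the example value from the text
print(y)
print(float.fromhex(x.hex()) == x)    # True: the hex form round-trips exactly
```

Because the significand is hexadecimal, the round trip through `hex()` and `fromhex()` is exact, unlike decimal string conversion.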

Engineering notation can be viewed as a base-1000 scientific notation.

Neocortex

From Wikipedia, the free encyclopedia
Neocortex
A representative column of neocortex. Cell body layers are labeled on the left, and fiber layers are labeled on the right.

The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language. The neocortex is further subdivided into the true isocortex and the proisocortex.

In the human brain, the cerebral cortex consists of the larger neocortex and the smaller allocortex. The neocortex is made up of six layers, labelled from the outermost inwards, I to VI.

Etymology

The term is from cortex, Latin, "bark" or "rind", combined with neo-, Greek, "new". Neopallium is a similar hybrid, from Latin pallium, "cloak". Isocortex and allocortex are hybrids with Greek isos, "same", and allos, "other".

Anatomy

Of the cerebral tissues, the neocortex is the most developed in its organisation and number of layers. The neocortex consists of the grey matter, or neuronal cell bodies and unmyelinated fibers, surrounding the deeper white matter (myelinated axons) in the cerebrum. It is a very thin layer though, about 2–4 mm thick. There are two types of cortex in the neocortex, the proisocortex and the true isocortex. The proisocortex is a transitional area between the true isocortex and the periallocortex (part of the allocortex). It is found in the cingulate cortex (part of the limbic system), in Brodmann's areas 24, 25, 30 and 32, the insula and the parahippocampal gyrus.

Of all the mammals studied to date (including humans), a species of oceanic dolphin known as the long-finned pilot whale has been found to have the most neocortical neurons.

Geometry

The neocortex is smooth in rodents and other small mammals, whereas in elephants, dolphins and primates and other larger mammals it has deep grooves (sulci) and ridges (gyri). These folds allow the surface area of the neocortex to be greatly increased. All human brains have the same overall pattern of main gyri and sulci, although they differ in detail from one person to another. The mechanism by which the gyri form during embryogenesis is not entirely clear, and there are several competing hypotheses that explain gyrification, such as axonal tension, cortical buckling or differences in cellular proliferation rates in different areas of the cortex.

Layers

The neocortex contains both excitatory (~80%) and inhibitory (~20%) neurons, named for their effect on other neurons. The human neocortex consists of hundreds of different types of cells. The structure of the neocortex is relatively uniform (hence the alternative names "iso-" and "homotypic" cortex), consisting of six horizontal layers segregated principally by cell type and neuronal connections. However, there are many exceptions to this uniformity; for example, layer IV is small or missing in the primary motor cortex. There is some canonical circuitry within the cortex; for example, pyramidal neurons in the upper layers II and III project their axons to other areas of neocortex, while those in the deeper layers V and VI often project out of the cortex, e.g. to the thalamus, brainstem, and spinal cord. Neurons in layer IV receive the majority of the synaptic connections from outside the cortex (mostly from thalamus), and themselves make short-range, local connections to other cortical layers. Thus, layer IV is the main recipient of incoming sensory information and distributes it to the other layers for further processing.

Cortical columns

The neocortex is often described as being arranged in vertical structures called cortical columns, patches of neocortex with a diameter of roughly 0.5 mm (and a depth of 2 mm, i.e., spanning all six layers). These columns are often thought of as the basic repeating functional units of the neocortex, but their many definitions, in terms of anatomy, size, or function, are generally not consistent with each other, leading to a lack of consensus regarding their structure or function or even whether it makes sense to try to understand neocortex in terms of columns.

Function

The neocortex is derived embryonically from the dorsal telencephalon, which is the rostral part of the forebrain. It is divided into frontal, parietal, occipital, and temporal lobes (regions demarcated by the cranial sutures in the skull above), which perform different functions. For example, the occipital lobe contains the primary visual cortex, and the temporal lobe contains the primary auditory cortex. Further subdivisions or areas of neocortex are responsible for more specific cognitive processes. In humans, the frontal lobe contains areas devoted to abilities that are enhanced in or unique to our species, such as complex language processing localized to the ventrolateral prefrontal cortex (Broca's area). In humans and other primates, social and emotional processing is localized to the orbitofrontal cortex.

The neocortex has also been shown to play an influential role in sleep, memory and learning processes. Semantic memories appear to be stored in the neocortex, specifically in the anterolateral temporal lobe. It is also involved in instrumental conditioning, being responsible for transmitting sensory information and information about plans for movement to the basal ganglia. The firing rate of neurons in the neocortex also has an effect on slow-wave sleep. When the neurons are at rest and are hyperpolarized, a period of inhibition occurs during a slow oscillation, called the down state. When the neurons of the neocortex are in the excitatory depolarizing phase and are firing briefly at a high rate, a period of excitation occurs during a slow oscillation, called the up state.

Clinical significance

Lesions that develop in neurodegenerative disorders, such as Alzheimer's disease, interrupt the transfer of information from the sensory neocortex to the prefrontal neocortex. This disruption of sensory information contributes to the progressive symptoms seen in neurodegenerative disorders such as changes in personality, decline in cognitive abilities, and dementia. Damage to the neocortex of the anterolateral temporal lobe results in semantic dementia, which is the loss of memory of factual information (semantic memories). These symptoms can also be replicated by transcranial magnetic stimulation of this area. If damage is sustained to this area, patients do not develop anterograde amnesia and are able to recall episodic information.

Evolution

The neocortex is the newest part of the cerebral cortex to evolve (hence the prefix neo meaning new); the other part of the cerebral cortex is the allocortex. The cellular organization of the allocortex is different from the six-layered neocortex. In humans, 90% of the cerebral cortex and 76% of the entire brain is neocortex.

For a species to develop a larger neocortex, the brain must evolve in size so that it is large enough to support the region. Body size, basal metabolic rate and life history are factors affecting brain evolution and the coevolution of neocortex size and group size. The neocortex increased in size in response to pressures for greater cooperation and competition in early ancestors. With the size increase, there was greater voluntary inhibitory control of social behaviors resulting in increased social harmony.

The six-layer cortex appears to be a distinguishing feature of mammals; it has been found in the brains of all mammals, but not in any other animals. There is some debate, however, as to the cross-species nomenclature for neocortex. In avians, for instance, there are clear examples of cognitive processes that are thought to be neocortical in nature, despite the lack of the distinctive six-layer neocortical structure. In a similar manner, reptiles, such as turtles, have primary sensory cortices. A consistent, alternative name has yet to be agreed upon.

Neocortex ratio

The neocortex ratio of a species is the ratio of the size of the neocortex to the rest of the brain. A high neocortex ratio is thought to correlate with a number of social variables such as group size and the complexity of social mating behaviors. Humans have a large neocortex as a percentage of total brain matter when compared with other mammals. For example, there is only a 30:1 ratio of neocortical gray matter to the size of the medulla oblongata in the brainstem of chimpanzees, while the ratio is 60:1 in humans.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...