
Edited by: Sladjana Z. Spasić, University of Belgrade, Serbia

Reviewed by: Fred Hasselman, Radboud University Nijmegen, Netherlands; Zoran M. Nikolic, University of Belgrade, Serbia; Ion Andronache, University of Bucharest, Romania

This article was submitted to Fractal Physiology, a section of the journal Frontiers in Physiology

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

The concept of a biological signal is becoming broader. Some of the challenges are: searching for inner and structural characteristics; selecting appropriate models to enhance perceived properties of the signals; extracting representative components and identifying their mathematical correspondents; and performing the transformations necessary to obtain a form suitable for subtle analysis, comparison, derived recognition, and classification. There is a unique moment when we match adequate mathematical structures to the observed phenomena: it allows the application of various mathematical constructs, transformations, and reconstructions. Finally, comparison and classification of newly observed phenomena often enrich the existing models with additional structure. For a specialized context, the modeling takes place in a suitable set of mathematical representations of the same kind, a set of models M, where the mentioned transformations are carried out and used for the determination of structures.

A biological signal is any mapping (change) of a biological quantity/content into a corresponding set (codomain), with the purpose of representing the particular process in a form suitable for studying, monitoring, and determination of functional connections (relations, dependences) between the studied quantity and its relevant constituents.

Changes in biological quantities can have particular significance and lead to the discovery of deep processes hidden from direct (e.g., visual) observation. It was discovered long ago that biological organisms function through a sequence of interconnected processes, the results of the action of systems and subsystems within hierarchically organized functions.

Hence, it is prudent to formally define a biological signal as a function of the form

Certain biological phenomena, such as body temperature or blood pressure, are analog. The number of erythrocytes, or the number of bacteria per unit of space, are examples of digital signals, but with a large number of units. The corresponding measuring procedures are designed to obtain suitable approximations within some finite scale. For example, the body temperature of a living human is scaled in degrees Celsius, with min = 35°

On the other hand, measurements are also performed in order to assess the presence or absence of a property/pathology. As such, they are combined with an additional binary scale (absence, presence), ternary scale (absence, presence, strong presence), and so on. Some phenomena require more complex structures involving indications of inner dependences, usually represented by multigraphs.

Regardless of the form of the performed measurements, modern computers have reached a technical level that allows implementation of various numerical and symbolic algorithms for the acquisition, representation, analysis, and transformation/manipulation of biological signals. Hence, the modern representation of biological signals uses mathematical structures (numerical or abstract) suitable for digitization, exact representation, deeper insights and, finally, classification. Within the very rich variety of biological signals, we focus here on some aspects of mathematical representation and operations covering a broad range of applications, thus illustrating the rich abundance of phenomena and their mathematical treatment, rather than attempting a more complete approach, which would require much more space and broader method coverage.

Automated acquisition and processing of biological signals has opened the possibility of eliminating subjectivity in the validation and interpretation of a measurement. At the same time, digitization has enabled application of a large mathematical apparatus, making nontrivial transformations of the initial content possible. The large number of scientific breakthroughs made in this way has established a new, highly prominent scientific discipline involving broad mathematical modeling and its computer implementations.

While developing systems for operation with biological signals in our group GIS (Group for Intelligent Systems), we have implemented systems for digital upgrades of existing analog research and clinical equipment for the measurement of, e.g., arterial pressure, ECG, EEG, specific neurology, ultrasound, NMR, and digital microscopy signals. Those systems have enabled digital acquisition of various types of related signals, including biometric parameters like voice and fingerprints, and acquisition of various molecular biology signals like chromosomes and genetic sequences. We have also implemented tools for representation, visualization, manipulation and transformation of signals and integrated them with CCD computerized microscopy.

In particular, developed software solutions include: signal monitoring, acquisition and real-time analysis (the first version was implemented in 1994); image acquisition and analysis (1994); image spectroscopy (1995); photomorphology (1995–1998); color-combined fluorescent microscopy (1997–1998); automated karyotyping involving object recognition, normalization, and classification (1997).

As mentioned above, before implementation, all measurements and analyses were performed manually by direct observation. The improvement in efficiency and precision was immediately observed by the involved researchers. The developed solutions have been in use for almost two decades at more than 20 research laboratories at the University of Belgrade, Lomonosov Moscow State University, and UC Berkeley, see (Jovanović,

In addition, we have also developed hardware for those laboratories including CCD microscopes, computerized EEG, ECG, CTG, acoustic RT spectroscopes, equipment for recording of magnetic field attenuation etc. (see Jovanović,

Those systems have enabled precise measurements; significant reduction of errors previously made by subjective visual detection of important features; nontrivial numerical, algebraic, geometrical, topological, and visual transformations of the acquired signals; and integration with other related computerized systems. In particular, images displayed in Figures

A mammal acoustic signal.

Arterial pressure signals with implanted transducers (rats) (Jovanović,

Garden snail neuron activity (Kesić et al.,

Mitosis-two chromosome distributions (Jovanović,

RNA dots related to the neuron nucleus (rats) (Jovanović,

FISH signal of the same preparation in two different wavelengths (Jovanović,

Gel used in molecular biology (Jovanović,

Integral computation.

In the last few decades we have witnessed impressive development of technologies and methods applied to biological signals. More powerful instrumental perception is progressing together with more powerful and more sophisticated methods.

A biological signal, coded in a computer as a digital function, is usually a finite approximation of an analog signal. Consequently, the sampling resolution should be sufficient to provide a quality acquisition, enabling detection, extraction, recognition, and normalization of important features in signals and adequate comparison with etalons. Moreover, the successful implementation of the mentioned procedures can be further enriched into fully or highly automated systems for classification, reasoning, and decision making. The aim is an essential improvement of the previously achieved insights.
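To make the sampling-resolution point concrete, here is a minimal sketch (the frequencies and rates are ours, purely illustrative) of how undersampling destroys a feature of the signal:

```python
import numpy as np

# A 9 Hz sine sampled at only 10 Hz is indistinguishable from a 1 Hz sine
# (aliasing), so the original feature cannot be recovered from the samples.
f_true, fs_low = 9.0, 10.0
t = np.arange(0, 1.0, 1.0 / fs_low)      # ten samples over one second
samples_9hz = np.sin(2 * np.pi * f_true * t)
samples_1hz = np.sin(2 * np.pi * 1.0 * t)

# At the sample points sin(2*pi*9*t) equals -sin(2*pi*1*t), so they coincide
print(np.allclose(samples_9hz, -samples_1hz))  # → True
```

Only a sampling rate above twice the highest frequency of interest keeps the two cases distinguishable.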

The older (we can say classical) methods, which are usually simple, do not necessarily lead to simplifications, though they are often ballasted with certain semantic limits. On the other hand, the more modern and sophisticated methods do not necessarily improve our knowledge. In the case of careless application, they might lead to false understanding with broader adverse consequences. Some of the issues related to methods for computation of Granger causality were discussed in (Kasum et al.,

The application of mathematics and statistics requires permanent criticism and scrutiny, especially at the points where these are connected to non-mathematical semantics. The proper mix of simple and complex modeling could offer substantial advantages.

The initial signal usually requires preprocessing involving different types of normalizations. The standard examples are:

Filtering of electrophysiological signals;

Filtering of microscopic optic signals and certain preprocessing operations, e.g., determination of contours of microscopic objects or their nonlinear transformations, or determination of contours of spectrogram features.

Discrete and continuous counting measures normalized to the real unit interval are the most prominent measures in the expression of observed statistical dependences, statistical analysis of experimental data, and probabilistic estimations on finite domains or on more abstract mathematical structures.

Biomedical statistical analysis involves comparison with the control group, computation of the relevant statistics (e.g., mean, variance, correlation coefficient,

As a consequence of the integration, any additional information that the initial signal carries will be lost. □

for 0 ≤

Simple calculation of the mean and its semantics.

However, the above signal can be interpreted as producing a sequence of equal-length tones.
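A tiny numerical illustration of this loss (the signals and values are ours, not from the example): two quite different signals share the same mean, while the Fourier spectrum still separates them.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
constant = np.full_like(t, 0.5)                 # flat signal at level 0.5
oscillating = 0.5 + np.sin(2 * np.pi * 5 * t)   # 5 Hz tone around 0.5

# Integration (the mean) cannot tell the two signals apart ...
print(np.isclose(constant.mean(), oscillating.mean()))  # → True

# ... but the spectrum can: the tone has a dominant component in bin 5 (5 Hz)
spectrum = np.abs(np.fft.rfft(oscillating - 0.5))
print(int(np.argmax(spectrum)))  # → 5
```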

While dealing with simple signals, with simple changes in time, a direct simple representation/visualization is often satisfactory. However, the study of subtler details and processes, and the integration of system insight, require increased complexity. Modern research demands, with important features that are invisible, higher complexity in representations and involved structures. This is the point of departure from the simple and simplest representations and measurements, opening room for more complex functions and structures and, consequently, for more complex measures and operations on these structures. It is very difficult to determine an upper bound for the complexity of mathematical structures when dealing with biological signals, especially now that everybody is aware that neurological signals are directly related to the processing of sensory information and system control in a full variety of situations. As an illustration, we refer the reader to the concept of Granger causality that has been extensively used in neuroscience, see (Granger,

Moreover, and much more importantly, biological signals like DNA sequences are information-bearing structures (even more, they are knowledge bases) and should be treated as such. A particular DNA molecule should also be studied through its set of consequences, not solely through its morphological properties. It seems prudent to involve the entire data science and a significant part of mathematical logic in the foundations of biology.

For example, the propositions “today is Tuesday” and “it is not true that today is not Tuesday” have the same meaning, but syntactically they are quite different. In terms of Euclidean metrics (the main tool for similarity estimation), they are quite distant. Thus, syntactical similarity can be quite different from the more important semantic similarity. Syntactical similarity only works properly if applied to objects in normal form (a concept similar to disjunctive or conjunctive normal form in propositional logic).

Back to DNA, we may ask the following questions:

Is there a normal form of a DNA sequence?

If the answer is positive, are the DNA molecules always in the normal form?

What are the properties of the “gene to protein” relation?

Can we produce an axiom system and derivation rules (i.e., logic) for the synthesis of proteins?

It is not our intention to dispute the well-established use of the Hilbert space formalism in the acquisition of biological signals. However, it cannot be the sole mathematical apparatus used in biology, since it offers nothing about consequence relations and deduction in general. It seems prudent to involve some other mathematical disciplines related to automated reasoning. For the reader unfamiliar with the basic concepts of mathematical logic we refer to (Mendelson,

This is why more complex methods are finding applications and are well established in the processing of biological signals. Here we briefly summarize some elements, with their relevant properties, that are already in broader use.

Where there are measurements, there are immediately measures. Signal processing techniques involve the application of different kinds of measures: counting (cardinality), probabilistic, vector-valued (non-monotonic), common Euclidean geometric measures, special probabilistic Boolean ({0, 1}-valued) filters (which emerge when deciding whether an object has a certain property or not), and so on. Usually, the sets occurring in experiments are fairly simple in the sense that they can be adequately approximated by finite sets, or by finite Boolean combinations of intervals and points. As such, they can be measured rather directly and easily. Original entities/objects are put in correspondence with their mathematical representations. Then a question obviously arises: to what extent are the representations of a certain kind of entities similar or identical? This is resolved with distance measurements (metrics) between individual representations. Thus representations, no matter how simple or complex, become points in the space of representations, and distance measurements directly determine the similarity of the originals.

However, one should always be aware of the underlying measure algebra, particularly when dealing with probability measures. The main cause of so-called probability paradoxes is the absence of a precise determination of the underlying measure algebra, i.e., the absence of a precise definition of the set of events that can be measured with the given probability function. For readers unfamiliar with the basic probabilistic concepts we refer to the textbooks (Attenborough,

One of the subjects of contemporary research is the study of the impact of quantum phenomena on complex biological formations, from large molecules to large systems like the brain and related biological phenomena, e.g., consciousness. Along this line has emerged the awareness of the necessity of precise description and understanding of signals and structures that are more complex, which leads to the utilization of more complex sets (events) and measures on them.

An example of this kind would be the determination of the geometric probability of a set with a fractal or rather complex boundary. Fractals have become broadly present in biology in the representation of biological functions and the characterization of their complexity. Functions are sets; events in a probability space are sets.

Another example of more complex measures involves Boolean measures on the set of natural numbers ℕ induced by nontrivial filters and their total extensions.

The first measurements of the more complex curves and geometric objects were performed with the discovery and application of the infinitesimal calculus. The definite integral

Development of calculus has brought the methods for integration of more complex functions, e.g., functions with countably many jump discontinuities and functions with essential discontinuities. The abstract concept of an integral has been finally shaped with Lebesgue's theory of measure and integration.

Starting with the basic geometric measures arising from the Euclidean metric (length of a straight line segment, area of a rectangle, volume of a cube), the measure of more complex sets is determined by application of the σ−additivity property:

μ(⋃_{n∈ℕ} A_{n}) = ∑_{n∈ℕ} μ(A_{n})

for pairwise disjoint sets A_{n}, n ∈ ℕ.

For example, let

Since

for

This was a significant improvement of the Riemann integral.

The modern understanding of a probability is as a normed measure on a probability space. More precisely, a probability space is a triple (Ω, 𝓕, P), where Ω is the set of outcomes, 𝓕 is a σ-algebra of events, and P is a σ-additive measure on 𝓕 with P(Ω) = 1.

The σ−additivity implies that for pairwise disjoint events A_{i}: if

μ(A_{i}) = 0 for all i, then μ(⋃_{i} A_{i}) = 0.

Some examples:

Calculating area of the curved trapezium;

Calculating the area of a figure whose boundary has finitely many step discontinuities;

Calculating the area of a figure whose boundary has countably many step discontinuities;

Calculating geometric probability of the set with simple boundary;

Calculating geometric probability of the set with fractal boundary (e.g., Weierstrass function).

Note that

The notions of metric and measure play an important part in the modeling of similarity. In the study of information-bearing structures, most notably formal deductive systems, it is often easier to define a measure than a metric. For example, a consistent propositional theory (set of formulas) T induces a measure μ_{T} on the Lindenbaum algebra

Here

One of the most common ways to generate a metric from a given measure μ is to measure the symmetric difference:

d(A, B) = μ(A △ B).

The obtained metrics
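For finite sets with the counting measure, this construction can be checked directly (a toy sketch; the sets are illustrative):

```python
# d(A, B) = mu(A symmetric-difference B), with mu the counting measure.
def d(A, B):
    return len(A ^ B)  # Python's ^ on sets is the symmetric difference

A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
print(d(A, A))                        # → 0 (identity)
print(d(A, B) == d(B, A))             # → True (symmetry)
print(d(A, C) <= d(A, B) + d(B, C))   # → True (triangle inequality)
```

For a general measure, sets whose symmetric difference is a null set must be identified for d to separate points; for the counting measure only equal sets have distance zero.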

The most commonly known meaning of the notion of dimension is the cardinal number of any basis of the given vector space. For example, the dimension of the Euclidean space ℝ^{n} is, as expected, equal to n.

Another important concept of dimension is topological dimension. We shall omit a rather cumbersome technical definition, and try to illustrate the concept in the case of charts. A ^{n} for ^{n} of the form

where each _{i} ⊆ℝ is an interval and each _{i}:_{1} × … × _{k} → ℝ is a smooth function. For example, a sphere with radius

Generally, a plain curve can be intuitively described as the set of the form

where

Note that if

The Higuchi fractal dimension procedure became popular with its expanding applications to biological, especially neurological, signals. It has been used alone or in combination with other signal analysis techniques in revealing complexity patterns in single-neuron activity as well as in EEG/ECoG signals that originate from complex neuronal networks in different physiological and pathophysiological conditions (Kesić and Spasić,
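The procedure can be sketched as follows (our own implementation outline of the standard Higuchi estimate; the function name and the choice kmax = 8 are ours): the signal is subsampled at lags k = 1, …, kmax, normalized curve lengths L(k) are averaged over offsets, and the dimension is read off as the slope of log L(k) against log(1/k).

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Sketch of Higuchi's fractal dimension estimate for a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks, Ls = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)            # subsampled series
            if len(idx) < 2:
                continue
            # normalized curve length L_m(k) of the subsampled series
            diff_sum = np.sum(np.abs(np.diff(x[idx])))
            lengths.append(diff_sum * (N - 1) / ((len(idx) - 1) * k * k))
        ks.append(k)
        Ls.append(np.mean(lengths))
    # the dimension is the slope of log L(k) versus log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
    return slope
```

As a sanity check, a straight line yields a dimension of 1, while white noise approaches 2.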

On Figure

Higuchi's fractal dimension (Kesić et al.,

Construction of Cantor set.

In 1918 Felix Hausdorff introduced a generalization of the notion of topological dimension in order to classify objects with fractal boundaries.

The connection between Hausdorff measure and Lebesgue measure is rather strong, as it is stated by the following theorem.

^{n} is a Borel set and that μ_{n} is the Lebesgue measure on ℝ^{n}. Then,

Now the Hausdorff dimension is defined by

dim_{H}(A) = inf{s ⩾ 0 : H^{s}(A) = 0}.

A consequence of Theorem 3.3.2 is the fact that the topological dimension of any smooth manifold M is equal to dim_{H}(M).

^{n};

dim_{H}([0, 1] × {0}) = 1.

The more interesting examples are related to various fractals.

The intuitive definition goes as follows:

Start with C_{0} = [0, 1];

Remove the open middle third from C_{0}. More precisely, C_{1} = [0, 1/3] ∪ [2/3, 1];

Repeat the above procedure on each closed subinterval. For example, C_{2} = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1].

The corresponding Hausdorff dimension of the Cantor set C is given by dim_{H}(C) = log_{3}(2) ≈ 0.6309.
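The construction can be sketched in a few lines (illustrative code; the names are ours), recovering both the vanishing total length and the dimension log_{3}(2):

```python
import math

def cantor_step(intervals):
    """One construction step: remove the open middle third of each interval."""
    out = []
    for a, b in intervals:
        t = (b - a) / 3.0
        out.append((a, a + t))
        out.append((b - t, b))
    return out

# Intervals remaining after 5 steps, starting from C0 = [0, 1]
level = [(0.0, 1.0)]
for _ in range(5):
    level = cantor_step(level)

print(len(level))                              # → 32 (= 2^5 intervals)
print(round(sum(b - a for a, b in level), 6))  # → 0.131687 (= (2/3)^5)
# 2^n intervals of size 3^(-n) cover C_n, which recovers the dimension:
print(round(math.log(2) / math.log(3), 4))     # → 0.6309
```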

The Cantor comb is the set

Construction of Koch line.

This procedure is repeated ad infinitum. Similarly, the Koch snowflake is constructed from an equilateral triangle by transformation of its edges into Koch lines, as shown in Figure

Construction of Koch snowflake.

The Hausdorff dimension of both Koch line and Koch snowflake is equal to log_{3}(4).

Construction of Sierpinski triangle.

Construction of Sierpinski carpet.

The Hausdorff dimension of Sierpinski triangle is equal to log_{2}(3), while the Hausdorff dimension of Sierpinski carpet is equal to log_{3}(8). □

One of the natural questions involving metric characteristics of a given subset of a metric space is to compare measures of sets and their boundaries. A motivation can be found in classical problems of finding a figure with a fixed type of boundary (or fixed measure) with maximal or minimal area or volume. An example of this kind is finding a figure of maximal area whose boundary has the fixed length

^{n}. We define the boundary-interior index (BI) of

In the following examples we shall calculate BI for several important sets illustrating characteristic cases.

Moreover, circumference _{1}(∂

^{n} where

^{2} bounded with

and

Hence, bi(

On the other hand, let ^{3} that is formed by rotation of

and

Thus, bi(

_{n} be the figure obtained in the

and

Thus, _{0} is equal to

_{n} be the figure obtained in the

and

Thus, bi(_{0}.

In the case of Sierpinski carpet,

_{n} be the figure obtained in the _{1}(_{2}(

On the other hand, _{1}(^{2} of positive measure disjoint to _{2}(

On the other hand, boundary of Cantor comb contains

_{1}(

The case bi(_{n−1} (∂_{n}(_{n}(

With respect to objects in ℝ^{n} for _{n−1}(∂_{n−1}(∂

When there is a need to calculate the energy under a fractal curve, or to further integrate it as with spectrograms, we immediately switch to 2D or 3D objects with a complex (fractal) boundary.

Early image processing initiated efficient algorithms for penetrating images (Haralick et al.,

Identification of the central meridian-line of a chromosome before normalization-“rectification,” the feature preparation for the metric-comparison (Jovanović,

Normalized chromosomal structure detail in chromosomal coordinate system (Jovanović,

3D representation of absorption in chromosomes; top left, non-normalized example, the lowest chromosome from Figure

Chromosome measurement, comparison and classification (Jovanović,

FFT spectrogram as a part of the acoustic melody recognition (Jovanović,

Genetic content is well ordered within chromosomes, with individual genes located at specific positions, organizing the chromosomal coordinate system. Chromosomal (karyotype) classification has gained in importance, since any change, however small, is related to the most important life aspects of the studied organism.

The methods and techniques applied in these analyses are expanding at an accelerated rate. Besides karyotyping and its comparison with developing standards toward the localization and classification of individual genes, the identification of irregular chromosomes with backtracking of the genetic material forming them, as well as the localization of hardly perceptible (small) fractures and their extraction and further analysis, have been in the research focus (Jovanović et al.,

In the formation of microscopic preparations of chromosomes, they take randomly bent forms. The images (patterns) of light absorption correspond to two-argument absorption-intensity functions, whose graphs are 3D manifolds with a characterizing distribution of convex and concave parts (dark and light segments).

The longitudinal distortions (bending), unless negligible, make direct geometric analysis and comparison hard or unreasonable.

What initially remains is the investigation of algebraic and topological invariants of the representing manifolds. Given the multitude of single-chromosome shapes, we are forced to operate with these representations collected into large sets, which is a serious complication. In this preliminary part of chromosomal analysis we recommended a rather simple controlled normalization procedure, as follows (Jovanović et al.,

After the initial contour definition, we form the original chromosomal coordinate system with orthogonal section lines on the central meridian line. This determines the initial geodesics and the corresponding metrics. By preserving this central meridian in its original length, using Euclidean distance (which departs substantially from digital, pixel-wise distance), rectifying it and positioning the orthogonal lines at the original points, we obtain the receiving Euclidean coordinate network (mesh). This mesh is used to map the original pixels into the receiving orthogonal mesh.

The inflections of the meridian will demand interpolation of pixels in the receiving network, and they correspond to the convex side. The concave (symmetric) part will demand pixel fusions in the receiving image, which is the rectified chromosome. Such normalization is very suitable for the application of metrics to determine the degree of similarity of a chromosome to other compared chromosomes, leading rather directly to classification. Thus, rectifying normalization is intended to produce an image of the studied chromosome as it would be if the chromosome did not have any inflections in the preparation production.
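A simplified sketch of such a mapping (our own illustrative code, not the GIS implementation): the meridian is given as a polyline, and pixels are resampled along its normals by bilinear interpolation into a straightened band.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear sample of a 2-D image at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x1]
            + fy * (1 - fx) * img[y1, x0] + fy * fx * img[y1, x1])

def rectify(img, meridian, half_width):
    """Straighten a band around a polyline meridian of (y, x) points.
    Output rows follow the meridian; columns follow its normals."""
    pts = np.asarray(meridian, dtype=float)
    tang = np.gradient(pts, axis=0)               # tangents via differences
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    normals = np.stack([-tang[:, 1], tang[:, 0]], axis=1)
    out = np.zeros((len(pts), 2 * half_width + 1))
    for i, (p, nrm) in enumerate(zip(pts, normals)):
        for j, t in enumerate(range(-half_width, half_width + 1)):
            y, x = p + t * nrm
            y = min(max(y, 0.0), img.shape[0] - 1.0)
            x = min(max(x, 0.0), img.shape[1] - 1.0)
            out[i, j] = bilinear(img, y, x)
    return out
```

For a straight vertical meridian the band is returned unchanged, which is a convenient sanity check; for a bent meridian the interpolation and fusion effects described above appear on the convex and concave sides.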

Clearly, a smaller inflection enables more precise rectification of the particular chromosome. In cases when the inflection angle induces substantial detail damage, the rectification procedure can be frozen at any desirable angle, thus preserving important image sections, or, if necessary, the chromosome length can be extended.

The alternative procedure is to generate narrow longitudinal bands concentric to the original curved meridian. Those bands should contain the smaller features that are undesirably distorted in the above normalization of the whole chromosome, and only the selected narrow band is rectified. This approach reduces the above disadvantage to a negligible level.

Once normalized, chromosomal images are well positioned over a simple rectangular domain. Obviously, the algebraic-topological invariants in the original chromosomes are now algebraic-geometric invariants in the (almost) orthogonal chromosomal coordinate system.

In the early nineties, zooming the chromosome into the chip diagonal, we managed to obtain a resolution close to 100k pixels per chromosome. Now, with pixels reduced a hundredfold, the number of pixels per chromosome increases proportionally, offering high-resolution orthogonal chromosomal systems. The consequence is a significant improvement of accessible details within the observed genetic structures. Once real 3D high-resolution chromosome images become reality, we will deal with 3D chromosomal orthogonal cylindrical geometry, with appropriate metrics.

In this way, the original chromosome manifold _{i}(_{i}(

and

assuming that the central meridian is collinear with the _{i}.

Similarly, form such a vector for the minimums. In this way we can use these vectors as single simple chromosomal invariants and define measures on such representations which would indicate the level of chromosome similarity and provide general classification. Then for two representation vectors v_{i} and v_{j} we can define the metric by

d(v_{i}, v_{j}) = (∑_{k} (v_{ik} − v_{jk})²)^{1/2}.  (*)

The alternative is to calculate the relative distances of the nonzero coordinates of v_{i} and v_{j} and use these vectors in the metric (*). For alternative purposes we apply more or less refined metrics based on the Euclidean metric, e.g., less refined for global comparisons, more refined for detailed inspections.
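A toy sketch of such a representation and comparison (all names and profile values are hypothetical): local maxima of an absorption profile are placed into a fixed-length vector, and two vectors are compared with the Euclidean metric.

```python
import numpy as np

def extremum_vector(profile, length):
    """Place local-maximum heights of a 1-D profile at their positions;
    all other coordinates stay zero (an illustrative invariant)."""
    p = np.asarray(profile, dtype=float)
    v = np.zeros(length)
    for i in range(1, len(p) - 1):
        if p[i] > p[i - 1] and p[i] > p[i + 1]:
            v[i] = p[i]
    return v

def dist(vi, vj):
    """Euclidean metric between two representation vectors."""
    return float(np.linalg.norm(vi - vj))

v1 = extremum_vector([0, 2, 0, 3, 0], 5)
v2 = extremum_vector([0, 2, 0, 0, 0], 5)
print(dist(v1, v1))  # → 0.0
print(dist(v1, v2))  # → 3.0
```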

Earlier we defined some normalized and fuzzy metrics using simplified chromosomal representations. If a more detailed and more precise similarity measurement is needed, for the representing set we can take all local extremum structures, instead of the point-wise projections on the meridian lines (thus, 2D structures).

Another complementary structural study of chromosome images supports operations on chromosomes with multiple FISH signals, and detection of very small features on chromosomes, see (Jovanović et al.,

Infinite dimensional function spaces, in particular Hilbert spaces, have become a natural mathematical background for signal processing. A Hilbert space

A countable Fourier basis of H is a sequence e_{n}, n ∈ ℕ, such that:

〈e_{n}, e_{n}〉 = 1 for all n;

〈e_{i}, e_{j}〉 = 0 for i ≠ j.

The number

A number of semantic distortions and complications occur if the system

Note that a Fourier basis can be uncountable. However, the number of nonzero coordinates is at most countable, which is the statement of the classical theorem stated below:

_{i} :

In signal processing, the standard Hilbert space is the completion of the space of continuous functions on the closed interval [−π, π]. Recall that the scalar product is defined by

〈f, g〉 = ∫_{−π}^{π} f(x) g(x) dx.

The corresponding standard Fourier basis

The discrete Fourier transform and the fast Fourier transform (FFT) are the most common and most popular methods for the expansion of the numerical vector

One of the main assumptions is that a given signal

Consequently, in order to isolate and extract disjoint periodic components of a signal, it is necessary to successively perform the FFT with a

For instance, performing FFT for the signal from the Example 2.2 with

Furthermore, reducing
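As a minimal illustration of isolating periodic components with the FFT (our own example; the signal and rates are invented), the dominant spectral peaks land exactly at the component frequencies:

```python
import numpy as np

fs = 1000                             # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)       # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# The two largest spectral peaks sit at the two component frequencies
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # → [50.0, 120.0]
```

Repeating such an analysis over successive windows produces exactly the kind of spectrogram discussed below.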

In particular, with a spectrogram with 50 equidistant spectra we can compensate for the possibly or certainly erroneous insight and understanding of circumstances induced by the analysis of single spectra. Applying some interventions on Fourier spectrograms, e.g., (Jovanović et al.,

The example on Figure

Shown are FFT spectrograms of arterial pressure (AP) in hemorrhage experiments, exhibiting the actions of AP modulators present in AP regulation (system antagonists: renin-angiotensin, sympathetic nervous system, and vasopressin).

Spectrograms showing the normal AP state and the spectrogram changes and regular feature destructions after administration of scopolamine methyl nitrate (Jovanović,

Some other issues are related to the semantics of signal processing by Fourier spectroscopy, see Spasić et al. (

Secondly, if the spectrogram contains small, hardly detectable, or imperceptible components, in some cases they can be detected and extracted by application of specific methods developed for image processing. Some of them are applied for the analysis and detection of small features in chromosomes (e.g., Bradski,

We can conclude that measures applied in various classification problems have better semantic correspondence with reality when used on sufficiently resolved spectrograms or on their features. Furthermore, it is clear that all relevant measures will involve similar invariants (features), with high context dependence.

Specific situations often change the approach to the choice of an adequate measure for the complexity of features. In the case of chromosomes, the Euclidean geometry is replaced by the local chromosomal geometry induced by the corresponding geodesics (contours, meridians). In spectroscopy, possible measures will focus on some of the following.

Position of dominant lines;

Dispersion;

Second order FFT performed on extracted features;

Counting/comparing of peaks within a certain frequency range with the threshold ε;

Binary 0−1 measures defined by filters and maximal filters, for example connected to the position of higher harmonics.
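The peak-counting measure from the list above can be sketched as follows (a simple local-maximum rule of our own; `eps` plays the role of the threshold ε):

```python
import numpy as np

def count_peaks(spectrum, eps):
    """Count local maxima of a spectrum exceeding the threshold eps."""
    s = np.asarray(spectrum, dtype=float)
    inner = s[1:-1]
    # a point is a peak if it exceeds both neighbors and the threshold
    is_peak = (inner > s[:-2]) & (inner > s[2:]) & (inner > eps)
    return int(np.count_nonzero(is_peak))

print(count_peaks([0, 3, 0, 5, 0, 1, 0], eps=2))  # → 2
print(count_peaks([0, 3, 0, 5, 0, 1, 0], eps=0))  # → 3
```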

Additional treatment of measures on spectra and spectrograms in more general settings is given and discussed in the next section.

The EEG resolution (the number of electrodes on the scalp) exceeded 2^{8} points more than a decade ago. The density of electrode signals for EEG will increase with technological development and is expected to reach thousands soon.

The relationship of different signals within integrated neurological functions received significant attention in the last few decades. The focus was mainly on the problem of modeling brain connectivity. Developed models have led to the broad range of applications in numerous experimental laboratories, contributing to the rich discourses of fundamental importance in neuroscience.

Clearly, as every process in the brain involves certain signal processes, any investigation of neurological signals almost certainly faces the most complex kind of signals. It is also well known that highly complex system behavior mimics highly chaotic random systems.

For this reason, the successful modeling of stock market trends by Clive Granger in the late sixties and early seventies (Granger,

The initial Granger causality model was improved by Geweke; for vector variables it has the form

\[
X(t)=\sum_{k=1}^{p}A(k)\,X(t-k)+E(t),
\]

where \(X(t)=(x_{1}(t),\ldots,x_{n}(t))^{T}\) and \(E(t)\) is the vector of innovations (prediction errors) with covariance matrix \(\Sigma\). The Geweke measure of directed causality from \(y\) to \(x\) at frequency \(\lambda\) is

\[
F_{y\to x}(\lambda)=\ln\frac{\lvert S_{xx}(\lambda)\rvert}{\lvert S_{xx}(\lambda)-H_{xy}(\lambda)\,\Sigma_{2}(\lambda)\,H_{xy}^{*}(\lambda)\rvert},
\]

where

\[
S(\lambda)=H(\lambda)\,\Sigma\,H^{*}(\lambda),\qquad
H(\lambda)=\Bigl(I-\sum_{k=1}^{p}A(k)e^{-i\lambda k}\Bigr)^{-1}.
\]

Here \(S(\lambda)\) is the spectral density matrix, \(\lvert\cdot\rvert\) denotes determinant, and \(S_{xx}(\lambda)\) is the upper left block of the spectral density matrix \(S(\lambda)\).

Finally, \(\Sigma_{2}(\lambda)\) is the matrix of error variance.
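These quantities can be computed numerically from fitted VAR coefficients. The sketch below uses hypothetical coefficients for a bivariate VAR(1) in which x is driven by the past of y, and evaluates the scalar form of the Geweke-type causality at a single frequency; it is an illustration, not the authors' implementation.

```python
import numpy as np

def var_spectral_matrix(A, Sigma, lam):
    """Transfer matrix H(lambda) and spectral density S(lambda) of a VAR
    model X(t) = sum_k A[k] X(t-k) + E(t) with innovation covariance Sigma.
    A has shape (p, n, n); lam is the frequency in radians."""
    p, n, _ = A.shape
    Abar = np.eye(n, dtype=complex)
    for k in range(1, p + 1):
        Abar -= A[k - 1] * np.exp(-1j * lam * k)
    H = np.linalg.inv(Abar)        # transfer matrix H(lambda)
    S = H @ Sigma @ H.conj().T     # spectral density S(lambda)
    return H, S

# hypothetical bivariate VAR(1): x depends on y(t-1), y is autonomous
A = np.array([[[0.5, 0.4],
               [0.0, 0.5]]])
Sigma = np.eye(2)
H, S = var_spectral_matrix(A, Sigma, lam=0.3)

# scalar Geweke-type causality y -> x at this frequency
sigma2 = Sigma[1, 1] - Sigma[1, 0] * Sigma[0, 1] / Sigma[0, 0]
F = np.log(S[0, 0].real / (S[0, 0] - H[0, 1] * sigma2 * np.conj(H[0, 1])).real)
print(F > 0)  # -> True: nonzero influence from y to x
```

Since the off-diagonal coefficient A[0][0][1] is nonzero, the past of y improves the prediction of x and the measure is strictly positive at this frequency.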

The idea of Geweke that directed causality between the two nodes

In the implementations, the major connectivity measures estimate:

Connectivity between two nodes;

Direction of connectivity between them;

Intensity of connectivity between them.

All of these properties are integrated into a single measure, while generally neglecting the frequency λ at which causality is constructed, replacing it with the maximum over a frequency interval Λ.

Following Geweke, Kaminski and Blinowska introduced a modification called the directed transfer function (DTF), defined by

\[
\gamma_{ij}^{2}(\lambda)=\frac{\lvert H_{ij}(\lambda)\rvert^{2}}{\sum_{m=1}^{n}\lvert H_{im}(\lambda)\rvert^{2}},
\]

where \(H(\lambda)\) is the transfer matrix of the fitted multivariate autoregressive model, measuring causality from node \(j\) to node \(i\).
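At a single frequency the DTF is a simple row-normalization of the squared moduli of the transfer matrix. The transfer matrix below is hypothetical, chosen only to make the normalization visible:

```python
import numpy as np

def dtf(H):
    """Directed transfer function at one frequency: dtf[i, j] is the
    normalized inflow |H_ij|^2 / sum_m |H_im|^2 from node j to node i."""
    P = np.abs(H) ** 2
    return P / P.sum(axis=1, keepdims=True)

# toy transfer matrix at a single frequency (hypothetical values)
H = np.array([[1.0, 2.0j],
              [0.5, 1.0]])
D = dtf(H)
print(D[0, 1])  # -> 0.8 (strong inflow to node 0 from node 1)
```

By construction each row of the DTF matrix sums to one, so the measure compares inflows to a given node rather than absolute coupling strengths.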

Sameshima and Baccala proposed a somewhat different approach in modifying the Geweke measure (Sameshima and Baccala,

\[
\pi_{ij}(\lambda)=\frac{\bar{A}_{ij}(\lambda)}{\sqrt{\bar{a}_{j}^{*}(\lambda)\,\bar{a}_{j}(\lambda)}}
\]

Here \(\bar{A}_{ij}(\lambda)\) is the \(i,j\) entry of \(\bar{A}(\lambda)=I-\sum_{k}A(k)e^{-i\lambda k}\), and \(\bar{a}_{j}(\lambda)\) is the \(j\)-th column of \(\bar{A}(\lambda)\).
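A direct transcription of the partial directed coherence definition, again with hypothetical VAR(1) coefficients and a single frequency, might look as follows:

```python
import numpy as np

def pdc(A, lam):
    """Partial directed coherence pi[i, j] = |Abar_ij| / ||abar_j||,
    where Abar(lambda) = I - sum_k A[k] e^{-i lam k} and abar_j is
    its j-th column. A has shape (p, n, n)."""
    p, n, _ = A.shape
    Abar = np.eye(n, dtype=complex)
    for k in range(1, p + 1):
        Abar -= A[k - 1] * np.exp(-1j * lam * k)
    col_norms = np.sqrt((np.abs(Abar) ** 2).sum(axis=0))
    return np.abs(Abar) / col_norms

# hypothetical bivariate VAR(1) coefficients
A = np.array([[[0.5, 0.4],
               [0.0, 0.5]]])
P = pdc(A, lam=0.3)
print(np.allclose((P ** 2).sum(axis=0), 1.0))  # -> True
```

In contrast to the DTF, the normalization here runs over columns, so PDC compares the outflows from a given node.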

Earlier, they also introduced the direct coherence measure with the intention to estimate direct connectivity between nodes

More recently Sameshima and Baccala introduced information PDC and DTF (Takahashi et al.,

and

Here \(\sigma_{jj}\) is the variance of the so-called partialized innovation process \(\zeta_{j}(t)\), obtained from the innovations of \(x_{j}(t)\) after removing the influence of the remaining processes \(x_{l}(t)\), \(l \neq j\).

Let us mention that numerous experimental teams have used the above measures in their discoveries, where these measures reached their highest popularity in the formation and formulation of key conclusions and results, including further modifications (Brovelli et al.,

In (Kasum et al.,

Presenting the three qualities (1) integrally, we neglect the differences in their importance and mask the most important aspect: being connected. For this reason, we proposed their separate analysis, with certain additions, which can result in a different insight into the local inconsistency of the above methods. This is briefly shown in Figure

Connectivity difference between DTF and PDC (Kasum et al.,

The left diagram shows the connectivity difference between the two measures with a corrected statistical zero value. The right diagram contains the same connectivity difference between the two connectivity measures after the natural harmonization of the two experimental zeroes. The consequence is the loss of the connectivity difference in the very example by which the authors of PDC illustrated the difference from, and the advantages of, their method. By manipulating different values of the statistical zero, one can reach arbitrarily desirable conclusions. Since we have earlier shown that DTF exposes abundant connectivity, where almost everything is connected (D. Adams' axiom), the same will now be true for PDC as well, if only its sensitivity is sufficiently adjusted, though not as far as in the original measure comparisons (Sameshima and Baccala,
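The sensitivity to the choice of statistical zero can be illustrated by thresholding one and the same connectivity estimate at two different zero levels. The matrix below is hypothetical, a sketch only:

```python
import numpy as np

def binarize(conn, stat_zero):
    """Declare a connection present when the measure exceeds stat_zero."""
    return conn > stat_zero

# one hypothetical connectivity estimate, two choices of statistical zero
conn = np.array([[0.00, 0.04, 0.30],
                 [0.06, 0.00, 0.02],
                 [0.25, 0.07, 0.00]])
strict = binarize(conn, 0.10)   # only strong links survive
loose  = binarize(conn, 0.01)   # almost everything is connected
print(strict.sum(), loose.sum())  # -> 2 6
```

The same estimate thus yields either a sparse or an almost fully connected graph, which is exactly the inconsistency discussed above.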

Some alternative approaches were suggested by other research teams (Kroger et al.,

On the other hand, we introduced the concept of weak connectivity (Kasum et al.,

For a set

Here

The use of

Besides the above considerations, we also recommended that connectivity be considered over the time interval

For the power spectra product of the initial time point _{E}(λ) = π_{E}(λ,

and

as connectivity measures over the time interval

Other methods to establish connectivity on these higher structures are available. Once connectivity between the sets of signals is established, we may consider the other two properties: the direction and the intensity of connectivity.

Biological research centered on biological signals is in explosive expansion, with neurological contents leading in complexity. With some 100 billion neurons and orders of magnitude more neuronal connections, the individual brain, as an information processing system responsible for all knowledge accumulated in history, plus much other behavior, exceeds by far the complexity of the processing of the whole Internet, with all its rich parallelism and powerful computational nuclei.

The unknown complexity of the individual working brain is still far out of reach of our understanding. It is certainly the most powerful function humanity has met in its history. Numerous processes are multi-valued, certain processes binary, dispersed over a range of frequencies. Learning the unknown functionality from the hardware and individual signal sources is the hardest possible approach. Even with a simple personal computer, it would be a very hard way to reach an understanding of the software system controls involved, especially all the components of the operating system.

Yet, there are already conferences and discoveries related to the operation of human consciousness, which was until very recently a "nonscientific category." The approach of parallel investigation of a multitude of tasks is promising, as some of the issues are being resolved from multiple projections. The number of combined teams of scientists engaged in brain research is growing, engaging significant resources, which might prove useful.

The mathematical methods briefly discussed here, and much more, are a product of the brain, thus having their representation and life within the brain long before being used in brain modeling. Thinking in this way, we can be sure that all the Mathematics so far applied to biological signals is anything but too complex, as we have never experienced a situation in which the very complex is completely described by the very simple.

Nevertheless, we should mention some issues that will be faced sooner in much simpler environments, like Quantum Physics and Cosmology. People usually consider Mathematics a tool set sitting on the shelves, ready to be applied by anyone, in whatever capacity and in whatever fragments of its developed contents, of ever growing complexity, as natural scientists and engineers learn more of Mathematics. And this is good, as Mathematics is public property. History teaches us that it is hard to guarantee, even for the most abstract parts, that any piece of discovered Mathematics will never be needed in application. This is the only security for the future of Mathematical funding.

With the growing complexity of the applied mathematical concepts, we are approaching some serious issues in the foundations of Mathematics. Before that, let us mention that the symbol ∞ has not represented infinity uniquely since Cantor's discoveries in 1873, when he showed that arithmetical and geometric infinity, i.e., the natural numbers and the real line, are different infinite quantities. As a consequence, infinity has been scaled in terms of pairwise different cardinal numbers. However, this scale is enormous; it cannot be coded by any set. This was the creation of Set Theory, and the beginning of the study of the foundations of Mathematics, which will probably never end.

When dealing with the simplest measurements and the simplest Euclidean measures, we tend to think that everything can be measured. One can only imagine the disappointment of Lebesgue, who developed the beautiful completion of measure and integration, when Vitali found a rather simple set on the real line which is not Lebesgue-measurable. In fact there are

The existence of non-measurable sets is highly counterintuitive. These sets cannot be sketched; they are totally amorphous. Sets with fractal boundaries can be seen as a bridge toward an intuitive visualization of non-measurable sets.
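Vitali's argument can be sketched in a few lines (a standard reconstruction, not the notation of the original paper):

```latex
% Sketch of Vitali's construction (assuming AC).
% Define x ~ y iff x - y is rational, and partition [0,1] into classes:
\[
  x \sim y \iff x - y \in \mathbb{Q}, \qquad
  V \subset [0,1] \ \text{picks one point from each class (by AC)}.
\]
% The rational translates V_q = V + q, for q in Q \cap [-1,1], are pairwise
% disjoint and satisfy [0,1] \subseteq \bigcup_q V_q \subseteq [-1,2].
% Countable additivity and translation invariance would force
%   1 \le \sum_q \mu(V) \le 3,
% which is impossible whether \mu(V) = 0 or \mu(V) > 0.
```

So no translation-invariant, countably-additive measure extending length can assign a value to V.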

From earlier examples, namely from Lobachevsky's discovery of non-Euclidean geometries in the 1820s, which went against all beliefs about the nature of Geometry, we learned that mathematical theories, packed around their axioms, can be at the same level of logical certainty, while obviously impossible to mix together, since their axioms collide. Lobachevsky showed the equiconsistency of the first non-Euclidean Geometry with the anciently and perfectly founded Euclidean Geometry which we still learn in school.

And within a very short time, a few decades, that discovery gave rise to huge developments in Geometry, immediately picked up by the most prestigious theoretical physicists as the proper Cosmometry (the Geometry of the Universe, or of its specific parts, e.g., the environments of black holes). Concerning the issues related to all measures, we have to say that numerous of them depend on the axiomatics of Mathematics, which is the defining Geometry of the Universe of Mathematics. And there are alternatives, combining a smaller set of fundamental axioms with their weaker or stronger versions.

Without entering a discussion that does not belong here, let us just say that AC (the Axiom of Choice) is very much needed in the foundations of Mathematics, but there are alternatives. AC implies that the Lebesgue measure is not total. However, it also implies that numerous of the mentioned measures are total. Banach proved that there is a total extension of the Lebesgue measure which is finitely-additive, while, as the Solovay theorem shows (Solovay,

On the other hand, we can stay on the flat Earth and deal only with short approximations of the phenomena, avoiding entering the zone of complex Mathematics and its fundamental issues. Yet, as proved by Gödel, we cannot escape the hot issues even by remaining only within Arithmetic, nor within any theory containing a copy of it (like Geometry).

Our aim was not to deliver a comprehensive overview of all the metrics and measurements involved in contemporary biological studies. We have focused primarily on our own work. However, it is prudent to at least mention some of the important topics that are missing here.

The first is related to methods for fractal analysis developed initially for the fractal dimension of observed time series from human physiology and performance. We refer the reader to (Holden et al.,

The second is related to measurement of self-affine structures and a spectrum of scaling parameters. An example of this kind is the detrended fluctuation analysis presented in (Kantelhardt et al.,

The third is related to the recurrence quantification analysis based on Takens' theorem. For more information we refer the reader to (Webber and Marwan,

The fourth is related to properties such as ergodicity, anomalous diffusion and multiplicative interactions presented in (Molenaar,

The fifth and final topic is related to the application of non-commutative probabilities presented in (Brovelli et al.,

OK has written the initial draft of sections Measures and Metrics, Dimension, and Boundary-interior index; AJ has written the initial draft of sections Basics, Complexity Issues, Chromosomes, and Fourier spectroscopy (together with AP). AJ and AP have written the initial drafts of Introduction and Discussion. All authors have participated in revision and proofreading of the present version of the manuscript.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling editor declared a shared affiliation, though no other collaboration, with the authors at time of review. The reviewer ZMN declared a shared affiliation, with no collaboration, with the authors to the handling editor at time of review.

Images at Figures