A complex system is often associated with the emergence of new phenomena from the interactions among the system’s components. General anesthesia reduces brain complexity and so inhibits the emergence of consciousness. An understanding of complexity is necessary for the interpretation of brain monitoring algorithms. Complexity indices capture the “difficulty” of understanding brain activity over time and/or space. Complexity–entropy plots reveal the types of complexity indices and their balance of randomness and structure. Lempel–Ziv complexity is a common index of temporal complexity for single-channel electroencephalogram; it contains both power spectral and nonlinear effects, which can be separated using phase-randomized surrogate data. Computing spatial complexities involves forming a connectivity matrix and calculating the complexity of connectivity patterns. Spatiotemporal complexity can be estimated in multiple ways, including temporal or spatial concatenation, estimation of state switching, or integrated information. This article illustrates the concept and application of various complexities by providing working examples; a website with interactive demonstrations has also been created.

“There’s no love in a carbon atom, no hurricane in a water molecule, no financial collapse in a dollar bill.” — Peter Dodds

## What Are Complex Systems?

We could add the phrase “. . . and no consciousness in a neuron” to the quote above. Traditional reductionist science has achieved great explanatory success by constraining questions into tightly controlled experiments. However, it has become clear that there are limits to reductionist explanations. In the real world, many natural phenomena appear only when there is collective behavior arising from interactions between the components within extended systems—known as complex systems. Typical examples are given in the introductory quote. The study of complex systems has a sporadic intellectual lineage that covers almost the whole breadth of science, but about 30 to 40 yr ago, it became generally accepted as a legitimate research methodology that is complementary to the reductionist paradigm, as marked by a deluge of publications and a recent Nobel Prize in Physics awarded to Giorgio Parisi. At present, there is no general theory of complex systems and therefore no unitary definition of complexity. An intuitive understanding of complexity would be as an index that captures some “difficulty” associated with the system, such as the difficulties of describing or creating the system and its organization. A complex system is marked by many disorderly interacting elements that produce unpredictable and adaptive dynamics and transitions, self-organization, signs of criticality, and all sorts of qualitatively different phenomena—so-called emergent properties. These are properties that cannot be observed or inferred by studying the isolated individual components but that arise only when the components are interacting together in a system. Typically, emergent properties are very sensitive to the type and strength of the interactions between the components. Furthermore, a complex system can be described accurately with only a fraction of the microscopic details that constitute it.

How does the study of complex systems influence the practice of anesthesiology? The whole body is a structured but adaptable system, made up of large numbers of interacting components at many different scales. As a result, complex system phenomena are seen in all aspects of physiology and pharmacology. In this toolbox, we focus on the brain and its electrophysiologic output: mainly spontaneous electroencephalography (EEG) recordings but also magnetoencephalography or evoked responses. The brain is a complex adaptive system that gives rise to all sorts of emergent phenomena, such as consciousness and memory. We assume that the mesoscopic “state of the brain” is captured by the spatiotemporal patterns of its electrical field; that these are a manifestation of regional information flow in the brain; and that the syntax of information flow is required to generate the semantics that underlies consciousness. While in line with most proposed scientific theories of consciousness,^{1,2 } we acknowledge that these assumptions ignore many things, perhaps most importantly the underlying chemical milieu. Furthermore, the EEG is only a spatially coarse measure of the underlying electrical field, and it remains to be seen to what extent it represents the true nature of its parallel brain state.

At first glance, it may seem unlikely that the function of such a complex system could be captured in a single number that quantifies its complexity. Nevertheless, over the last 20 yr, a large body of literature has appeared that relates consciousness and brain complexity.^{3,4 } This has been catalogued by Sarasso *et al.*,^{3 } and we have summarized a comprehensive list of almost all the publications pertaining to anesthesia in Supplemental Digital Content 1 (https://links.lww.com/ALN/C868). The underlying assumption is that consciousness can only emerge if the brain is able to access and coordinate a suitably large repertoire of states, so it is alluring to imagine that measures of complexity might reliably track anesthetic impairment of consciousness, and most of the studies in Supplemental Digital Content 1 (https://links.lww.com/ALN/C868) are supportive of these ideas. However, this raises the (unanswered) question: “Exactly which *index* of brain complexity best captures the *actual* brain complexity that is specifically related to consciousness?” Many of the concepts presented in this review are somewhat abstract, so we will use the common analogy of an orchestra to try to make the arcane methodology more tangible. In essence, the emergence of the majesty and beauty of music from an orchestra is a metaphor for the emergence of consciousness from sufficiently complex brain dynamics. The different sections of the orchestra correspond to the different brain regions, and the various melodies and harmonies correspond to the EEG signals from different channels. It is important to note that we do not need to claim that complexity is identical to consciousness, only that sufficient complexity is a necessary prerequisite for the emergence of consciousness. The issue of the definition of consciousness is beyond the purview of this paper; we use the word “consciousness” to imply a simple phenomenologic sense of existence.

## Types of Complexity Indices and Their Relationship to Entropies

As previously mentioned, complexity may be defined as the difficulty in describing a phenomenon. Because there is no single formal definition of complexity, many (generalized and specialized) indices have been proposed, based on a particular aspect of the complex system that is difficult to describe or create. What it means to be complex may indeed vary depending on the system under consideration. We follow an intuitive taxonomy of complexities, proposed by Shiner *et al*.^{5 } The system may be complex because it has a complicated intrinsic structure that is difficult to describe (type 3 complexities); because its basic design or output is relatively simple but has a lot of randomness that is difficult to describe (type 1 complexities); or because it has a mixture of both (type 2 complexities).

It is easy to specify a perfectly regular (predictable) system, but it is more difficult to exactly specify the irregular output from a random system. In thermodynamics and statistical physics, the number of ways that small components can be arranged to produce a certain large-scale system pattern is its “entropy.” *Via* subsequent developments in information theory, many mathematical formulations of entropies have been developed that quantify the degree of irregularity, randomness, or predictability in a system. Thus, entropies are often used as type 1 complexity measures,^{5 } in which the complexity increases with increasing randomness as shown by the complexity–entropy diagrams in figure 1 (A and B).^{6 } To be majestic, music should not be too predictable.

The problem with the use of type 1 complexity measures is that maximum randomness does not necessarily correspond to emergence phenomena, because maximum randomness indicates that the system is in thermodynamic equilibrium,^{7 } whereas one of the features of complex systems is that they are not in equilibrium. To address this, a number of indices have been proposed that have low values both when there is perfect regularity and also when there is unconstrained randomness but that peak somewhere in between these two extremes, when there is some intricate structure or pattern and some randomness. An example of this inverted U shape is seen in the complexity–entropy diagram of figure 1C. These are type 2 complexities. They are also called “statistical complexities”^{8 } because while it is difficult to precisely describe the individual points in a completely random signal, it is often simple to describe the statistical distribution of a random output. Typically, they are calculated by multiplying an entropy term by the system’s distance from equilibrium. They are intuitively appealing, as they are maximal around the point where the system is in a state that is neither too rigidly regular nor too unpredictably random.^{7,9 } This is related to a zone of “criticality” (explored in more detail later in the text) and is currently the subject of intense neurobiologic research. It is easy to imagine that a conscious brain exhibits a controlled flexible agility indicative of function somewhere around this sweet spot. Disturbances in consciousness—as seen in both anesthetic coma and grand mal seizures—might be manifested as decreases in type 2 complexity. An orchestra is most impressive when the music is not perfectly predictable but also not completely random.
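
The distinction between type 1 and type 2 indices can be sketched numerically. The following is a minimal illustration, assuming a simple discrete distribution of system states and using the common “entropy × disequilibrium” construction; the exact formulas in the cited literature differ in detail:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def statistical_complexity(p):
    """A type 2 ('statistical') complexity: normalized entropy multiplied
    by the distance from equilibrium (the uniform distribution). It is
    zero for perfect order, zero again for pure randomness, and peaks
    somewhere in between."""
    n = len(p)
    h = shannon_entropy(p) / math.log2(n)       # normalized entropy, 0..1
    d = sum((x - 1.0 / n) ** 2 for x in p)      # disequilibrium
    return h * d

ordered = [1.0, 0.0, 0.0, 0.0]      # perfectly regular: one state only
random_ = [0.25, 0.25, 0.25, 0.25]  # maximally random: all states equal
mixed = [0.7, 0.1, 0.1, 0.1]        # some structure plus some randomness
```

Only the `mixed` distribution yields a nonzero value, reproducing the inverted U shape of figure 1C, whereas a type 1 index (the entropy alone) is maximal for the `random_` distribution.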

Type 3 complexities identify complexity with maximum order or structure.^{5 } The (complex) wakeful cerebral cortex undergoes a plethora of states and changes dynamically. To respond to changing environmental demands, it ought not to be too rigid. Because of this, as well as the irregular structure of the brain, these complexities have not been widely investigated in anesthesia research yet (although our example in fig. 2 suggests more work is needed in this regard).

As pertains to the brain and anesthesia, the best choice of complexity index is unclear but would be one that is maximal when consciousness is maximal. If we are using the temporal complexity from a single EEG channel to estimate the size of the brain’s repertoire of states, then type 1 indices (such as Lempel–Ziv complexity, permutation entropy, and approximate entropy) seem to work quite well, because consciousness often emerges when the EEG signal has high randomness, typified by a flat power spectrum. Anesthetic unresponsiveness is usually associated with increased alpha and delta EEG oscillations and so is marked by loss of the flat power spectrum. The commonly used indices of depth of anesthesia, such as the Bispectral Index and spectral entropy, have been heuristically derived but essentially incorporate and quantify the narrowing of the power spectrum caused by most hypnotic drugs. When compared in clinical data sets, the permutation entropy is usually closely correlated with the Bispectral Index, to the point that it can almost be used as an open-source surrogate. Choosing a complexity index that can also naturally accommodate the burst-suppression pattern of deep anesthesia is problematic. We note, however, that approximate entropy appears to correctly classify even burst-suppression patterns at deep anesthetic doses,^{10 } without the need for an additional burst-suppression algorithm, such as that implemented in the Bispectral Index. However, the situation is less clear in relation to hallucinogenic states, in which type 1 indices are typically greater than their nonhallucinatory wakeful values, and it is claimed that hallucinatory consciousness is greater than normal wakefulness.^{11–15 } As an example, figure 1 shows how type 1 and 2 complexities are altered by isoflurane and ketamine in anesthetic and subanesthetic doses.
It demonstrates that type 1 complexities fail to separate anesthetic ketamine from baseline wakefulness and in fact can show spuriously high values for anesthetic ketamine (see the *two blue squares* on the right-hand sides of the plots), whereas the type 2 complexity correctly attributes low values to these points. Both type 1 and 2 complexities correctly produce low values with isoflurane unconsciousness.
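
As an illustration of one of the type 1 indices mentioned above, here is a minimal sketch of permutation entropy (Bandt–Pompe ordinal patterns). The order and the example signals are illustrative choices, not the settings used in the figures:

```python
import math
import random
from collections import Counter

def permutation_entropy(signal, order=3):
    """Normalized permutation entropy: the Shannon entropy of the
    distribution of ordinal (rank) patterns of length `order`,
    scaled to [0, 1]. Higher values mean a less predictable signal."""
    counts = Counter()
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        # the ordinal pattern is the ranking of samples within the window
        counts[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(order))

ramp = list(range(100))                         # perfectly predictable
random.seed(0)
noise = [random.random() for _ in range(1000)]  # unpredictable
```

The monotone ramp contains a single ordinal pattern and scores 0, while the noise signal visits nearly all patterns and scores close to 1, the type 1 behavior described above.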

In contrast to analyzing irregularity in single EEG channels, if we are measuring spatial brain network connectivity to quantify the brain’s degree of organization, then type 2 or 3 indices may be more appropriate. This is because the wakeful state is not marked by random connectivity but emerges when there is some connectivity structure and pattern.

The rest of this paper will give examples that highlight some technical details about approaches to the calculation of various complexity indices as they pertain to anesthesia and the brain. We hope that this will facilitate further research that is needed to establish whether these ways of looking at brain function and consciousness are useful in understanding mechanisms of anesthesia and correlates of consciousness. To assist this, we have included in Supplemental Digital Content 2 (https://links.lww.com/ALN/C869) the MATLAB scripts used in these examples (figs. 1, 2, and 3) and also provided an online repository for interactive demonstrations of various concepts and Python functions to calculate complexities.^{16 }

## Temporal Complexity

Most studies have applied various complexity algorithms to a single-channel EEG time series and have successfully shown that anesthesia makes the EEG simpler and more predictable. This is analogous to listening to a single melody from the symphony played on a flute: it does not capture the whole grandeur of the music but often encapsulates the main themes. A series of methodologic choices are necessary to produce the final index. These include EEG montage, choice of frequency band or time segment length, formation of symbol sequence (zero crossing, permutations, choice of threshold), application of complexity or entropy algorithm, role of normalization, and surrogates. Because it is popular, we will use the Lempel–Ziv algorithm to illustrate the technical aspects of estimation of temporal complexity. Subsequent sections will discuss spatial and combined spatiotemporal methods.

### Lempel–Ziv Complexity

In essence, the Lempel–Ziv complexity algorithm quantifies how compressible a sequence of symbols is. A complex signal is one that cannot be summarized easily. Lempel–Ziv complexity is the basis of .zip file compression and is an estimator of entropy rate; hence, it is a type 1 complexity. Figure 3 depicts the details of the analytic process to derive the Lempel–Ziv complexity for a simple signal (such as the delta oscillation of deep anesthesia; fig. 3A) and a complex signal (such as might be seen in the awake brain; fig. 3B). First, the EEG is thresholded to produce a binary sequence of ones and zeroes, or it can be extended to a sequence of a small number of symbols.^{17 } Like Morse code, these sequences can be viewed as “words” (fig. 3, C and D) or musical phrases. The algorithm then counts the number of new words by assessing the nonreproducibility from the previous history (fig. 3, E and F).^{18 } We see that a regular or simple signal has only four types of words, whereas the complex or irregular signal has a larger diversity of eight sequences or words and hence a higher Lempel–Ziv complexity. The music is more interesting with a variety of phrases.
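
A minimal sketch of this pipeline follows. Note that figure 3 uses the original Lempel–Ziv (1976) reproducibility test; for brevity this sketch counts distinct phrases with the closely related LZ78-style parse, which behaves similarly (few words for regular signals, many for irregular ones):

```python
import random

def binarize(signal):
    """Threshold a signal at its mean to produce a 0/1 symbol string."""
    mean = sum(signal) / len(signal)
    return "".join("1" if x > mean else "0" for x in signal)

def lempel_ziv_complexity(symbols):
    """Number of distinct phrases in a sequential (LZ78-style) parse.
    Fewer phrases mean a more compressible, simpler sequence."""
    phrases, phrase = set(), ""
    for s in symbols:
        phrase += s
        if phrase not in phrases:   # a new 'word' for the dictionary
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

regular = "01" * 100                 # a predictable oscillation
random.seed(42)
irregular = "".join(random.choice("01") for _ in range(200))
```

The regular sequence parses into far fewer words than an irregular sequence of the same length, mirroring the four- *versus* eight-word example in figure 3.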

### Effect of Signal Properties

What does the Lempel–Ziv complexity mean biologically? In this section, we outline what signal properties drive changes in Lempel–Ziv complexity and what this may mean in the context of anesthesia.^{19,20 } We have summarized a list of relevant publications in Supplemental Digital Content 3 (https://links.lww.com/ALN/C870). In brief, some of the key factors are signal frequency, signal-to-noise ratio, noise bandwidth, and waveform shape. As signal frequency increases, Lempel–Ziv complexity also increases (fig. 4A), because higher frequencies produce more zero crossings and hence a larger dictionary of words. For similar reasons, as the noise level increases for a constant-amplitude signal, Lempel–Ziv complexity also increases (fig. 4B). Noise bandwidth and “color” (frequency content) are also important. Finally, if the signal becomes more nonsinusoidal, Lempel–Ziv complexity is not affected, because the zero crossings (and hence the zeroes and ones) remain the same (fig. 4C). Thus, Lempel–Ziv complexity of the EEG is high in wakefulness and when there is abundant frontalis muscle activity. It is important to recognize that other complexity metrics (such as permutation entropy, shown in fig. 4) may be altered by signal properties in quite different ways. More exploration of signal properties affecting complexity can be found in the interactive notebook.^{16 }

### Role of Surrogates

During anesthesia, the EEG is dominated by slow waves, which are lower in frequency than those during wakefulness. These slow waves produce “long words,” causing the Lempel–Ziv complexity to decrease. Additionally, the underlying broadband, background EEG signal also changes structure and moves to lower frequencies, resulting in a more negative spectral “slope” (exponent), which also decreases the complexity.^{21 } These effects suggest that Lempel–Ziv complexity is, at least in part, simply driven by anesthetic effects on the frequency content of the power spectrum. However, a strength of complexity metrics is that they can also capture nonlinear features of the signal, such as multiplicative interaction effects, or stochastic variability. If our primary research question is whether “complexity” offers information above and beyond that derived from the simple power spectrum, we can normalize it by the complexity computed for surrogate signals.^{22 }
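
The spectral “slope” mentioned above can be estimated with a simple log–log fit to the power spectrum. This is a rough sketch using a raw periodogram and a plain least-squares fit; dedicated tools separate the aperiodic and oscillatory components more carefully:

```python
import numpy as np

def spectral_slope(signal, fs, fmin=1.0, fmax=40.0):
    """Least-squares slope of log10(power) vs. log10(frequency) over
    [fmin, fmax] Hz. White noise gives a slope near 0; steeper (more
    negative) slopes indicate more low-frequency-dominated activity."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2       # raw periodogram
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

fs = 100.0
rng = np.random.default_rng(7)
white = rng.standard_normal(60 * int(fs))    # flat spectrum, slope ~ 0
brown = np.cumsum(white)                     # 1/f^2 spectrum, slope ~ -2
```

The integrated (brown) noise shows the steep negative exponent that, in the EEG, accompanies the anesthesia-induced shift toward slow waves.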

Phase-randomized surrogates are the most common tool for this process.^{23 } Here, the Fourier phases of the data are shuffled while the amplitudes are kept the same. This preserves the power spectrum but destroys nonlinear features of the signal. It is like keeping the famous four notes from Beethoven’s Fifth Symphony (duh-duh-duh-daaaa) but rearranging their order (duh-daaaa-duh-duh). The notes (frequency power) are the same, but the drama of the music has disappeared with the rearrangement. As this is a Fourier-based method, it assumes the signal is stationary. This means that to work properly, phase shuffling must be done on time segments short enough to be considered quasistationary (*i.e.*, unchanging average frequency and variance, typically 2 to 10 s). After creating surrogates (typically a few dozen to a few hundred), Lempel–Ziv complexity is applied to each one, and the original value for the signal is normalized by the mean of the surrogate complexities. In practice, results after normalization usually show lower but often still significant changes as anesthesia is induced.^{24,25 } This suggests that the complexity decrease seen in anesthesia is a combination of both power spectrum changes and nonlinear effects.
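
A sketch of the surrogate procedure, assuming a single quasistationary segment; `complexity_fn` stands in for any complexity index (Lempel–Ziv complexity, permutation entropy, and so on):

```python
import numpy as np

def phase_randomized_surrogate(signal, rng):
    """Surrogate with the same power spectrum as `signal` but with
    randomized Fourier phases, destroying nonlinear structure."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.shape)
    phases[0] = 0.0                 # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0            # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=n)

def surrogate_normalized(signal, complexity_fn, n_surrogates=40, seed=0):
    """Divide the observed complexity by the mean complexity of the
    phase-randomized surrogates."""
    rng = np.random.default_rng(seed)
    surrogate_values = [complexity_fn(phase_randomized_surrogate(signal, rng))
                        for _ in range(n_surrogates)]
    return complexity_fn(signal) / np.mean(surrogate_values)

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
s = phase_randomized_surrogate(x, rng)
```

A normalized value near 1 indicates that the raw complexity was explained by the power spectrum alone; values below 1 indicate additional nonlinear structure in the original signal.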

However, both power spectrum and nonlinear changes in the EEG may be important, and researchers should have clear hypotheses before applying complexity metrics. For instance, it is possible that slow waves may have a causal role in disrupting information flow in the brain.^{26,27 } If true at least in part, this would mean the frequency decrease is key to understanding loss of consciousness and complexity, and removing spectral effects by surrogate normalization may *not* be appropriate. On the other hand, if researchers want to explore specific effects that are not captured by the power spectrum or simply want to understand what is driving their complexity changes, phase randomization is a suitable method.

## Spatial Complexity

Spatial complexity is the complexity of the functional connectivity between brain regions. It is analogous to capturing the harmony between the different sections of the orchestra. Its calculation initially involves a number of steps to construct a suitable connectivity network, followed by estimation of the complexity of the resultant spatial patterns of connections. The changes with anesthesia are more subtle than those seen with temporal complexities but no less important. Imagine an orchestra where the cellos start playing a symphony different from that being played by the woodwinds. Clearly, successful music requires coordination and correct timing from all sections of the orchestra. Brain connectivity is essentially measuring whether widespread brain regions are playing small phrases in time and in harmony with each other. Network (or graph theory) measures of brain connectivity^{28 } are often used to indirectly summarize spatial complexity, and we refer readers to a review of network science in the study of anesthetic state transitions.^{29 } For a more classical illustration of spatial complexity, we work through the calculation of spatial complexity for an awake EEG and an anesthetized EEG, comparing two different connectivity metrics (fig. 2).

### Problem of Volume Conduction

EEG signals arise from many sources in the brain that can generate electrical fields large enough to be recorded instantaneously by more than one scalp EEG electrode. This is called volume conduction and is a potential confound that can lead to spuriously high results for 0 or π phase-lag connectivity.^{30 } If we visualize the EEG channels as microphones recording the symphony, clearly a single microphone in front of the violins will also get some music from the violas. The strategies to mitigate this problem include the application of a spatial filter (such as surface Laplacian transformation or source localization) before connectivity analysis and the employment of various connectivity measures that are insensitive to instantaneous correlation.^{31 } The latter assumes that there is no true biologic cause of apparent zero-lag connectivity arising from a common source, such as thalamic oscillations that are induced by many common anesthetic drugs.

### First Stage: Derivation of Functional Connectivity Matrices

There are many methods to estimate the functional connectivity in the brain, and we refer the readers to a review on the metrics and their interpretational issues.^{32 } They quantify the common activity shared between two electrodes and may be broadly divided into amplitude- and phase-based measures, which provide different insights about the altered states of consciousness induced by general anesthesia.^{33 } Phase-based measures determine whether individual musical notes are correctly synchronized between the flutes and the brass, whereas amplitude-based measures detect whether the flutes are increasing their loudness at the same time as the brass is increasing theirs. Figure 2 shows the construction of the functional connectivity network. We compare two main types of connectivity metrics: (1) an amplitude-based measure (amplitude–envelope correlation)^{34 } and (2) a phase-based measure (weighted phase-lag index)^{30 } from the EEG signals after applying surface Laplacian transformation. First, a band-pass filter is applied to extract signals in the alpha (α, 8 to 13 Hz) frequency band from two example electrodes (F3 and P3; fig. 2, A and B). The amplitude–envelope correlation is obtained by computing the Pearson correlation between the envelope time series derived from the magnitude of the Hilbert transform of the band-limited signals (fig. 2, C and D). The weighted phase-lag index measures the synchronization between the instantaneous phase signals (fig. 2E). It depends on the phase difference of the two signals (fig. 2F) but is weighted by the magnitude of the imaginary component of the cross-spectrum (fig. 2G), which is 0 when the phase difference is 0 or π (fig. 2H). With the functional connectivity estimated for every pair of signals, figure 2 (I and L) shows the resultant connectivity matrices between each pair of electrodes, for amplitude–envelope correlation and weighted phase-lag index, during baseline and isoflurane anesthesia.
It can be seen that the synchronized activity between anterior and posterior regions and within posterior regions during baseline was reduced during anesthesia, signified by most of the green, yellow, and red squares becoming blue.
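
The two metrics can be sketched as follows, assuming the signals have already been band-pass filtered; the analytic signal is computed with an FFT (the Hilbert-transform construction), and the example signals are illustrative, not the data of figure 2:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the Hilbert-transform construction)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def amplitude_envelope_correlation(x, y):
    """Pearson correlation between the instantaneous amplitude
    envelopes of two band-limited signals."""
    return np.corrcoef(np.abs(analytic_signal(x)),
                       np.abs(analytic_signal(y)))[0, 1]

def weighted_phase_lag_index(x, y):
    """wPLI: consistency of the phase lag between two signals, weighted
    by the imaginary cross-spectrum so that zero-lag (volume-conducted)
    coupling contributes nothing."""
    imag_cross = np.imag(analytic_signal(x) * np.conj(analytic_signal(y)))
    return np.abs(np.mean(imag_cross)) / np.mean(np.abs(imag_cross))

t = np.arange(0.0, 10.0, 0.01)                       # 10 s at 100 Hz
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)   # shared slow envelope
x = envelope * np.sin(2 * np.pi * 10.0 * t)
y_lag = np.sin(2 * np.pi * 10.0 * t - np.pi / 2)     # quarter-cycle phase lag
y_env = envelope * np.cos(2 * np.pi * 11.0 * t)      # different rhythm, shared envelope
```

The lagged pair scores high on the weighted phase-lag index (consistent nonzero phase difference), while the pair sharing only an amplitude envelope scores high on the amplitude–envelope correlation, illustrating the complementary nature of the two measures.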

### Second Stage: Complexity Estimation

In this example, we first apply the technique of singular value decomposition to capture how the complexity of the brain’s functional connectivity changes with anesthesia. Singular value decomposition is one way of summarizing how easy it is to condense the connectivity pattern. It is equivalent to seeing whether an acceptable symphony could be performed if the orchestra was reduced in size to include just the stringed instruments and no others. When this method is applied to the EEG connectivity matrices (fig. 2, I and L), the singular values for the amplitude–envelope correlation and weighted phase-lag index methods are shown in figure 2 (J and M). General anesthesia causes a decrease both in the largest singular value (28.3% in the amplitude–envelope correlation and 51.4% in the weighted phase-lag index) and in the diversity of all the singular values (38.3% in the amplitude–envelope correlation and 69.5% in the weighted phase-lag index). A limitation of such measures is that they reflect a global synchronization level; they have maximum values when all the time series are completely synchronized (*i.e.*, a rigidly regular state) and minimum values when all the time series are completely independent (*i.e.*, an unpredictably random state); this is an example of a type 3 complexity index.^{5 }
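
One way to sketch this stage is to take the singular values of the connectivity matrix and summarize their diversity. The exact diversity formula behind the percentages in figure 2 is not reproduced here; this sketch (an assumption for illustration) uses the entropy of the normalized singular-value spectrum:

```python
import numpy as np

def singular_value_diversity(connectivity):
    """Entropy (bits) of the normalized singular-value spectrum of a
    connectivity matrix. One dominant connectivity pattern gives low
    diversity; many comparable patterns give high diversity."""
    s = np.linalg.svd(connectivity, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]                # drop numerically zero components
    return float(-np.sum(p * np.log2(p)))

one_pattern = np.outer(np.ones(4), np.ones(4))  # rank 1: a single pattern
many_patterns = np.eye(4)                       # four equally strong patterns
```

A rank-1 matrix (the whole orchestra playing one part) gives zero diversity, whereas four independent, equally strong patterns give the maximum of 2 bits.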

As previously mentioned, a type 2 complexity index would have a maximum value between the two extremal states; that might more closely reflect the conscious brain’s proximity to a state of criticality. A recently proposed metric, functional complexity,^{35 } assesses the distance between the probability distribution of the connectivity values of all the channel pairs (*i.e.*, the off-diagonal values in the connectivity matrix) and a uniform distribution. In the two extremal states, when all the time series are completely synchronized (or completely independent), the connectivity values will be dominated by uniformly high (or low) values, corresponding to a narrow distribution and low complexity; in an intermediate state, the connectivity values will be more variable among different pairs of channels, corresponding to a wider distribution and high complexity. Using this method on the EEG connectivity matrices (fig. 2, I and L), figure 2 (K and N) shows the distribution of the connectivity values during baseline and isoflurane anesthesia. Again, general anesthesia decreased the functional complexity for both connectivity metrics: by 41.1% for the amplitude–envelope correlation and 52.4% for the weighted phase-lag index.
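
A sketch of this idea, assuming connectivity values scaled to [0, 1]; the published functional complexity uses a particular bin count and normalization, so treat the details here as illustrative:

```python
import numpy as np

def functional_complexity(connectivity, n_bins=20):
    """A type 2 index: 1 minus the normalized distance between the
    distribution of pairwise connectivity values and the uniform
    distribution. Near 0 when every pair has the same connectivity,
    near 1 when connectivity values are spread broadly."""
    n = connectivity.shape[0]
    off_diagonal = connectivity[~np.eye(n, dtype=bool)]
    hist, _ = np.histogram(off_diagonal, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    # total-variation-style distance, scaled so a delta distribution gives 1
    distance = np.abs(p - 1.0 / n_bins).sum() / (2.0 * (1.0 - 1.0 / n_bins))
    return 1.0 - distance

even_spread = np.zeros((5, 5))
even_spread[~np.eye(5, dtype=bool)] = np.linspace(0.01, 0.99, 20)  # varied pairs
all_equal = np.full((5, 5), 0.5)                                   # identical pairs
```

The matrix with varied pairwise connectivity scores near 1, while the matrix in which every pair is equally connected scores near 0, reproducing the narrow-*versus*-wide distribution argument above.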

## Spatiotemporal Complexity

The brain needs to process information in both space and time because brain regions are functionally segregated, and the brain’s computational goals change over time. Thus, it seems natural that, to truly quantify how complex the brain’s activity is, we need to consider both the spatial dimension and the temporal dimension, using multichannel EEG (fig. 5A). Many approaches to combine spatial and temporal information to produce spatiotemporal complexity have been proposed, but none has proven to be definitive, as yet. Most of the methods can be classified as either concatenation, state switching, or information integration methods.

### Concatenation

The simplest way to include information across spatial channels is to join the data together to make a long one-dimensional vector and apply the usual single-channel complexity metrics. This can be done either by concatenating channels across time points (fig. 5B, “temporospatial”)^{36 } or by concatenating time points across channels (fig. 5B, “spatiotemporal”)^{13,14,24,25 } and applying metrics such as Lempel–Ziv complexity.^{13,24,25 } The analogy for this would be the violins playing the whole symphony, followed by the woodwinds playing the whole symphony, followed by the brass playing the whole symphony and then calculating the average complexity of all three symphonic renditions. In theory, concatenation across time is independent of concatenation across space. However, in practice, these measures are highly correlated.^{37 } Complexity metrics can be applied on spontaneous or evoked EEG data.^{38 } As an example, the perturbation complexity index works by applying Lempel–Ziv complexity on a concatenated binarized signal after a pulse of transcranial magnetic stimulation.^{37,39 } The index discriminates unconsciousness well, even when applied to widely varying etiologies like sleep, anesthesia, and brain injury. However, it is clinically impractical, because it requires transcranial magnetic stimulation, high-density EEG, and source modeling. Although concatenated measures decrease with anesthesia, they have several limitations. First, they turn adjacent spatial correlations into slow temporal correlations (or vice versa), which is very different from identifying true spatial relationships. Second, all limitations relating to one-dimensional metrics still apply. Unless a type 2 complexity is used, we equate the highest complexity with pure randomness.
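
The two concatenation orders can be sketched directly, assuming a binarized (channels × time) array; any single-channel metric such as Lempel–Ziv complexity can then be applied to the resulting one-dimensional sequence:

```python
import numpy as np

def concatenate_spatiotemporal(eeg):
    """Time point by time point: all channels at t=0, then all channels
    at t=1, and so on ('spatiotemporal' in fig. 5B)."""
    return eeg.flatten(order="F")   # column-major: channels vary fastest

def concatenate_temporospatial(eeg):
    """Channel by channel: the whole of channel 1, then channel 2, and
    so on ('temporospatial' in fig. 5B)."""
    return eeg.flatten(order="C")   # row-major: time varies fastest

# a toy 2-channel x 3-time-point binarized recording
eeg = np.array([[1, 0, 1],
                [0, 0, 1]])
```

The choice of order determines whether spatial correlations reappear as short-range or long-range temporal structure in the concatenated vector, which is the first limitation noted above.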

### State Switching

Brain activity can be considered as switching between distinct spatial states (fig. 5C).^{40 } These states are assumed to be metastable, so the brain switches between them irregularly.^{41 } Hence, another way to capture spatiotemporal complexity is to look at complexity of the states and their switching dynamics. The musical analogy would be that of identifying symphonic motifs and their development. These states have been identified using k-means clustering,^{42,43 } hidden Markov models,^{11 } or dimensionality reduction techniques.^{26,36,44,45 } Once we have identified the states, complexity can be calculated on the temporal transition vector,^{42 } the size of spatial patterns,^{43 } or simply the number of states we need to effectively represent the data.^{36 } Alternatively, each state can also be defined as a set of binarized active/inactive channels (a “coalition”), in which active channels are either those above a threshold (amplitude–coalition entropy)^{13,24 } or those synchronous in phase (synchrony coalition entropy^{13,24 } or connection entropy^{46 }). The complexity is then quantified as the entropy (temporal diversity) of these coalitions. As another example, a variant of the perturbation complexity index, based on state transitions, was recently proposed.^{44,45 } This applies principal component analysis on an evoked potential average, quantifies state transitions in the temporal recurrence matrix, and computes complexity as the sum of the number of state transitions across principal components. This can be thought of as spatial × temporal complexity, *i.e.*, capturing spatial and temporal information without having to reduce the data to one dimension.
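
A simplified sketch of an amplitude “coalition” analysis follows; the published amplitude–coalition entropy additionally normalizes against shuffled data, which is omitted here for brevity:

```python
import numpy as np
from collections import Counter

def amplitude_coalition_entropy(eeg):
    """Entropy (bits) of the sequence of binary coalitions of active
    channels, where a channel counts as active at a time point when its
    absolute amplitude exceeds that channel's mean absolute amplitude."""
    active = np.abs(eeg) > np.abs(eeg).mean(axis=1, keepdims=True)
    counts = Counter(tuple(col) for col in active.T)   # one coalition per time point
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

rng = np.random.default_rng(0)
frozen = np.ones((4, 100))               # the same coalition at every time point
varied = rng.standard_normal((4, 1000))  # many different coalitions over time
```

A recording frozen in a single coalition scores zero, while a recording that visits many coalitions approaches the 4-bit maximum for four channels, the “temporal diversity” described above.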

### Integrated Information and Related Measures

Integrated information theory has received some attention as a candidate theory of consciousness. It proposes to equate conscious level with Φ, the effective information of the minimum information partition of a system (fig. 5D).^{2 } Motivated by theoretical arguments, researchers have tried to operationalize complexity as integrated information.^{47–50 } Many attempts compute metrics based on mutual information on subsets of data channels.^{47,51 } This is problematic, as mutual information is notoriously hard to calculate on continuous data without imposing restrictive assumptions on the signal, such as assuming it to be Gaussian.^{52 } More recent attempts have used modified Φ metrics with fewer assumptions, *e.g.*, based on autoregressive models.^{49 } However, these have their own issues, such as possibly taking negative values and giving high values during the bursts of burst suppression, a state of deep unconsciousness. All Φ-related metrics are also computationally very expensive, as ideally all partitions should be examined.

### Criticality, Scale-free Behavior, and Other Methods

We have concentrated on the complexity of temporal and/or large-scale spatial patterns, but the complexity of the interactions between different spatiotemporal scales is an important component of brain function. This is manifest in “power laws,” which may be measured in various ways including bilogarithmic plots of EEG power spectra (also referred to as 1/f noise, pink noise, or aperiodic activity),^{22 } detrended fluctuation analysis,^{53 } or neural avalanche sizes.^{54 } Power law behavior demonstrates scale-free organization, meaning that structure in the brain exists on multiple scales with no characteristic scale. A specific mechanism leading to such power law behavior is self-organized criticality, in which the critical point is stable and reestablished if the system is perturbed. This may be a plausible mechanism for *how* complexity arises in the brain, not just *what* it is.^{55,56 } A related analysis is the eigendecomposition of autoregressive models, which shows critical behavior as eigenvalues approach unity.^{54,57 }
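The simplest of these measurements, the slope of the bilogarithmic power spectrum, can be sketched as follows. The fitting range of 1 to 40 Hz is an arbitrary choice for illustration; practical tools also model the oscillatory peaks that sit on top of the aperiodic component, which a plain line fit ignores.

```python
import numpy as np

def aperiodic_exponent(x, fs, fmin=1.0, fmax=40.0):
    """Estimate the 1/f ('aperiodic') exponent of a signal by fitting
    a line to its power spectrum on bilogarithmic axes.

    Returns the negated slope, so pink (1/f) noise gives ~1,
    Brownian (1/f^2) noise gives ~2, and white noise gives ~0.
    """
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2          # raw periodogram
    keep = (freqs >= fmin) & (freqs <= fmax)   # fit band (arbitrary choice)
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
    return -slope
```

For example, applying this to white noise returns an exponent near 0, while applying it to a cumulative sum of white noise (Brownian noise) returns an exponent near 2.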

Changes to criticality and scale-free parameters have been observed under anesthesia.^{21,43,53,54,57–59 } A limitation so far has been the difficulty of interpreting criticality metrics; the language of criticality needs to be linked to a wider understanding of functional connectivity, brain organization, and other measures of complexity. Critical systems show a close correlation between functional and structural networks, as indexed by the pair correlation function.^{48,60 } A large value of the pair correlation function means that phase configurations are highly variable over time, and as such, the system can also be considered metastable.^{61 } Furthermore, being near the critical state may be what allows the brain to sustain a large amount of integrated information.^{48 } With all the above in mind, a full review of criticality and its relationship to brain complexity is outside the scope of this toolbox, although there is a need for further synthesis of research in this area.
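Assuming the common operationalization of the pair correlation function as the variance of the Kuramoto order parameter over time, scaled by the number of channels (exact definitions vary between studies), a minimal sketch is:

```python
import numpy as np

def _analytic_signal(x):
    """Analytic signal via the FFT (a NumPy-only stand-in for
    scipy.signal.hilbert), applied along the last axis."""
    n = x.shape[-1]
    spectrum = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h, axis=-1)

def pair_correlation_function(data):
    """Variance of the Kuramoto order parameter r(t) over time, scaled
    by channel count. Large values indicate highly variable phase
    configurations (metastability). data: (n_channels, n_samples),
    assumed already narrowband filtered."""
    phases = np.angle(_analytic_signal(data))
    r = np.abs(np.mean(np.exp(1j * phases), axis=0))  # order parameter r(t)
    return data.shape[0] * np.var(r)
```

A fully synchronized system has r(t) = 1 at every time point, so its pair correlation function is zero; values peak when phase configurations fluctuate between synchronized and desynchronized states.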

A radically different approach comes from the emerging field of topologic data analysis,^{62 } which is based on the intuition that the shape of the data and its basic topologic features (*e.g.*, the number of holes) are important. It is robust to noise and deformations, and a recent study showed differences in topologic properties after application of ketamine and propofol.^{63 } Spatiotemporal information is naturally incorporated: each channel represents a dimension, and each time sample becomes a point in a high-dimensional embedding space. The complexity is then related to the number of cycles (holes) in this space and their properties. However, more work needs to be done to make such abstract results interpretable in terms of brain activity and connectivity.
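The simplest topologic feature, the number of connected components of the point cloud at a given distance scale (Betti-0), can be computed with a short union–find sketch like the one below. Real analyses track holes (Betti-1 and higher) across all distance scales using persistent homology software, which is beyond the scope of a short example.

```python
import numpy as np

def betti0(points, eps):
    """Number of connected components (Betti-0) of a point cloud when
    points closer than `eps` are joined -- the simplest topologic
    feature tracked by persistent homology. points: (n_points, dim)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Pairwise Euclidean distances between embedded points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] < eps:
                parent[find(i)] = find(j)   # merge the two components
    return len({find(i) for i in range(n)})
```

Sweeping `eps` from small to large and recording when components merge (and when holes appear and disappear) is the essence of persistence: features that survive across many scales are considered topologically meaningful.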

## Conclusions and Recommendations

There are many different ways to extract temporal and/or spatial information into complexity metrics. Most show a good separation between conscious and anesthetized states, but none has yet emerged as definitive. When evaluating the relevance of a publication or designing an experiment, it should be specified which emergence phenomenon is of interest and what its important temporospatial scales are. Conscious perception seems to emerge over the 100-ms to 2-s time scale, and working memory over a longer one. There should also be some hypothesis or indication about whether the emergent phenomenon arises from conditions of maximum randomness (use type 1 complexity measures) or from conditions with some structure or criticality (use type 2 or 3 complexity measures). The reader or experimentalist must acknowledge that numerous technical details have a big influence on the results (see box 1). As such, presenting more than one metric of complexity may give an indication of the robustness of the conclusions. It is necessary to understand how an index is affected by various patterns of noise and artifacts, the issue of volume conduction, and how surrogate data are generated, and whether these factors are pertinent to the research question.

Anesthesia profoundly disturbs cerebral neurodynamics at many levels. The concept of complexity has the appeal of succinctly capturing the crucial abstract properties of the brain that drive transitions between altered states of consciousness. There is greatness in the symphony of the conscious brain, but how its complexities are distorted and disturbed by anesthesia is still an open question.

### Box 1. Technical Considerations

- Is the brain complexity measured in time and/or space? (Do the data consist of single-channel time series and/or multichannel connectivity?)
- Are type 1 or type 2 measures being used?
- For time complexities, what is the electroencephalography (EEG) montage; choice of frequency band or time segment length; formation of symbol sequence (zero crossing, permutations, choice of threshold); application of complexity or entropy algorithm; and role of normalization and surrogates?
- For spatial and temporospatial complexities, what is the EEG montage/choice of reference/Laplacian/source model; choice of frequency band; choice of interchannel coupling index; formation of symbol sequence (choice of threshold, weighted); choice of subregions or whole brain; application of complexity algorithm; and role of normalization and surrogates?

### Further Resources

- Supplemental Digital Content 1, 2, and 3 of this article
- Our worked examples/simulations on GitLab (https://gitlab.com/marcoFabus/complexity_toolbox)
- Lee U, Mashour GA: Role of network science in the study of anesthetic state transitions. Anesthesiology 2018; 129:1029–44
- Sarasso S, Casali AG, Casarotto S, Rosanova M, Sinigaglia C, Massimini M: Consciousness and complexity: A consilience of evidence. Neurosci Conscious 2021; 7:1–247
- Scholarpedia

### Acknowledgments

The authors thank George A. Mashour, M.D., Ph.D., and colleagues at the Department of Anesthesiology, University of Michigan, Ann Arbor, Michigan, who conducted the original studies that provided the EEG data used in the working examples. The authors also thank Jisung Wang and Heonsoo Lee, Ph.D., for sharing the MATLAB script for type 2 complexity, which was developed at the Department of Physics, Pohang University of Science and Technology, Pohang, Gyeongbuk, South Korea, and William Taft, M.B.Ch.B. (Hutt Valley Health, New Zealand), for the metaphors used in the paper.

### Research Support

Supported by Department of Anesthesiology, University of Michigan, Ann Arbor (to Dr. Li); Department of Anesthesiology, University of Auckland (to Dr. Sleigh); and Wellcome Trust grant No. 203139/Z/16/Z (to Dr. Fabus). For the purpose of open access, Dr. Fabus has applied a CC-BY public copyright license to any author accepted manuscript version arising from this submission.

### Competing Interests

Dr. Sleigh is a handling editor for Anesthesiology. The other authors declare no competing interests.

## Supplemental Digital Content

Summarized Published Papers on Brain Complexity and Anesthesia, https://links.lww.com/ALN/C868

Matlab Functions Used in the Calculation of Complexity in the Figures, https://links.lww.com/ALN/C869

Interactive Website on How Signal Properties Affect Complexity Metrics, https://links.lww.com/ALN/C870