

Table of contents
Terminology for Individual Adaptive Systems
Basic Attributes of Living Beings
The Individual Adaptive System (Egostat)
The Cell as an Element of a Super-Egostat (the Organism)
Hierarchy of Egostats: The Key Principle
Symbiosis as Egostat Evolution
Homeostatic State Differentiator (DiffSigner)
Ego-Centricity of Adaptive Regulation
Images – Unique Combinations of Constituent Features
Implementation of the Basic Principles of Individual Adaptivity
Tree-Like Organization of the Image Hierarchy
Novelty in the Adaptive System
Priority Attention Channel (Aten)
Innate Reflexes (Genoreflexes)
Cerebellar Reflexes (OptReflexes)
Global Informedness Picture (Infocart or Infocontext)
Informational Functions (Infofunctions)
Semantic Understanding Model (Semantory)
Awareness Function Dispatcher (Dispatcheron)
Elements of the Awareness Process
Mechanism of Awareness Participation in the Awareness Process
Dominanta of an Unsolved Problem (Gestalt)
Organization of the General Meaning (Themes) of Performed Actions
Play as an Evolutionary Means of Accelerating Individual Development
The Structure of the System Model of the Subjective
On Subjective Experience and the Philosophical Zombie
Abstract
This book presents a great deal that is novel and unconventional, yet everything is grounded in axiomatically verified research findings (fornit.ru/a1) and empirical modeling (fornit.ru/beast). The goal is to propose an implementation-independent, algorithmic, hierarchical model of a living entity capable of adaptation, learning, and consciousness.
This book presents a synthesis of the principles of the Individual Adaptivity System, as described in the monograph "Foundations of the Fundamental Theory of Consciousness" (fornit.ru/68715), peer-reviewed by prominent scientists (fornit.ru/69539) and published by Rusains Academic Publishing House. This synthesis has enabled a definitive liberation from the constraints of natural implementation specifics. A list of foundational articles published in peer-reviewed journals is available at: fornit.ru/66452.
A holistic architecture of an adaptive system is proposed, encompassing everything—from cellular homeostasis to consciousness, intuition, and creativity.
It is emphasized that implementation does not require neuron emulation and is feasible even on an ordinary personal computer.
Quantum hypotheses, panpsychism, and the “hard problem of consciousness” are rejected as unfounded.
Consciousness is interpreted as an adaptive process rather than a metaphysical entity.
Precise, functionally defined terms are introduced to eliminate the ambiguity and contradictions inherent in traditional concepts such as “instinct,” “unconditioned reflex,” and “homeostasis.” This inevitably creates a high entry barrier: even with a glossary available, the reader must exert considerable effort.
The text exhibits maximum information density. Each paragraph contains substantial conceptual blocks, requiring slow, thoughtful reading. This is not material for quick familiarization.
The model is structured as a hierarchy of adaptive levels, each new level built upon the previous one, offering a realistic path toward “strong AI.”
Most importantly, a clear algorithm of the awareness process is presented (without superfluous entities and explaining why this is not a philosophical zombie), grounded in the entire preceding organization of the adaptive system.
The book is recommended for specialists in AI, cognitive science, neuroscience, and philosophy of consciousness, as well as for anyone interested in constructing models of “strong AI” and prepared to overcome a high intellectual entry threshold. This is not merely a book—it is a research program expressed as a complete theoretical system.
Introduction
“Circuitry” refers to a design methodology based on cause-and-effect relationships, characteristic of diverse implementation types—electrical, software-based, model-based, and others—where components are linked through causal interactions. Many scientists consider it unacceptable to suppose that natural biological entities—and even more so, humans—are constructed on the same principle as a television set, believing instead that life imbues living beings with something as yet unknown, which accounts for such sacral phenomena as consciousness. To think that nature creates through circuitry is deemed vulgar.
The “circuitry” metaphor may be perceived as reductionist, especially in the context of consciousness and emotions; however, such a label is inappropriate—for the following reasons: fornit.ru/67331.
In a collection of exotic theories of consciousness (fornit.ru/69716), one can observe attempts to relocate the essence of the psyche into unfathomably complex quantum effects or even to assert panpsychism—a fundamental substrate of animation. David Chalmers is one such philosopher, having declared the essence of consciousness an unsolvably hard problem. Yet careful examination reveals numerous incorrect assumptions: fornit.ru/69784.
It has been demonstrated that consciousness can be modeled entirely algorithmically, yielding a holistic model with adaptive functionality. This was achieved through software implementation of a prototype individual adaptive system (the Beast project, as well as the Isida project). There exists a philosophical trap—the “philosophical zombie” problem posed by D. Chalmers: even if a system exhibits all external attributes characteristic of an intelligent being, this does not necessarily mean it genuinely possesses subjective experiences. Such reasoning rests on the assumption that adaptive processes are one thing, while qualia are something fundamentally different—an assumption that is categorically false.
Unlike most theories of consciousness that remain at the level of philosophical speculation, the proposed model has been empirically verified.
In 2025, the academic publisher Rusains released the monograph “Foundations of the Fundamental Theory of Consciousness” (fornit.ru/68715), which presents the core framework of the model based on extensive factual research data. This monograph is largely tied to natural biological implementation, making it more accessible to neuroscientists and more evidentially robust, since nature has already implemented the described principles, thereby verifying the concept.
A working prototype of an individual adaptive system has been created (the Beast project: fornit.ru/beast), in which individual elements of the evolutionary hierarchy of development have been optimized and functionally validated up to the most complex components.
Implementation of individual adaptive system projects became possible by separating the specifics of natural biological realization from the pure principles of adaptive regulation. In particular, neurons are not fundamentally necessary components of the model; by abandoning their emulation, the required computational resources become feasible even on a modest consumer-grade computer.
The present book largely discards non-essential features of natural biological implementation, rendering the model more universal and implementation-independent. Many terms from biology and neuroscience acquire here more precise, rigorous, and correct definitions.
This raises the comprehension threshold; therefore, the system of interrelated terminological definitions has been formalized into a glossary, with a link to it consistently appearing in the top-left corner of each page. Reading the glossary alone constructs the conceptual framework of the individual adaptive system, facilitating a deeper understanding of the interrelations among the model’s elements.
Although the new terminological environment creates information overload, traditional terms are often insufficiently defined and even contradictory. In the domain of adaptive mechanisms, conventional terminology has become obsolete and impedes accurate understanding (e.g., confusion among different types of reflexes: fornit.ru/art7). Thus, adopting the new terminology was an urgent necessity. Mastering it to the point of fluency constitutes an inevitable difficulty in entering this new conceptual system.
The material in this book is not a description of a particular instance of natural biological implementation of individual adaptivity and bears no relation to neuroscience beyond the most general principles—those independent of implementation method. The book itself is a description of the system of such principles.
This book deliberately omits formulas, graphs, illustrations, and diagrams—not out of fear of complexity, but in pursuit of maximal semantic clarity.
Formulas are temptingly compact, yet they substitute genuine understanding with the habit of manipulating symbols. A reader may “swallow” an equation without grasping its essence and mistakenly feel they have understood it. In reality, they have merely memorized notations.
Images and diagrams, in turn, create an illusion of clarity. They fix thought within a single perspective, suppressing its inherent multidimensionality. Consciousness cannot be confined to arrows and blocks—it lives in the dynamics of significance, the branching of contexts, and the constant re-evaluation of connections. Any diagram inevitably oversimplifies and often distorts, imposing a false static quality on processes that are inherently recursive and fluid. This applies universally to all forms of popularization.
The primary aim of this book is not to provide a “picture,” but to activate within the reader the process of meaning-making. Meaning arises not from viewing an image, but from internal articulation, focused attention to conceptual logic, and the construction of one’s own living mental model. Only in this way does authentic understanding emerge—not borrowed, but grown from within.
Therefore, nothing is included that might distract from meaning: neither visual embellishments nor the false precision of inevitably simplifying formalism. There are only words—a rigorous, sequential, terminologically precise exposition of how the individual adaptive system is structured.
However, the book contains numerous supplementary materials accessible via short links to an aggregator website, where illustrative content may be found.
The monograph “Foundations of the Fundamental Theory of Consciousness” serves as the evidentiary core of this work, as it is based on a functioning model of evolutionary adaptive principles, supported by a vast corpus of published empirical research data, their comparison, and synthesis into a unified model framework. The monograph references an extensive body of the author’s own publications in scientific journals and has itself been peer-reviewed by prominent scientists. The concept was refined during the development of the Beast prototype and is accompanied by numerous supplementary explanatory and justificatory materials. This has enabled a streamlined exposition of the implementation-independent individual adaptive system by leveraging already verified components, rendering the book universally accessible as a textbook.
It should be noted that the egostat is not a narrow algorithm but a holistic architectural system encompassing all levels of individual adaptivity. Due to its scale and hierarchical complexity, a traditional “computational experiment” (e.g., accuracy on a test set) would be inappropriate: it would reflect only a behavioral fragment, ignoring systemic integrity.
Instead, model verification is achieved through a working software prototype (Beast), whose open-source code (fornit.ru/b_code) constitutes an executable formalization of the entire architecture. The code allows direct confirmation of the functionality of all claimed mechanisms—from the significance differentiator to the awareness cycle—and simultaneously serves as specification, implementation, and verification tool.
Thus, the scientific validity of the model is confirmed not by statistical metrics, but by its structural completeness, algorithmic implementability, and its capacity to integrate homeostasis, reflexes, attention, and consciousness into a unified causal framework.
Given the model’s scale and integrative nature, its primary value lies not in traditional empirical indicators, but in the clarity of its axiomatic foundation, its coherent structure, and the logical progression of its adaptive hierarchy. Full comprehension of this architecture itself serves as proof of the concept.
This is not merely a list of isolated terms, but a system of interrelated definitions. Crucially, the adaptive functionality of every term is explicitly specified.
For multi-word terms, single-word equivalents are proposed to replace obscure abbreviations: the full term is given first, followed by the single-word form in parentheses.
Vital (from “vital” – life-critical) – a normalized quantitative measure of a life-critical physiological parameter; a numerical characteristic of a specific type of condition critically important for sustaining life. Examples include concentrations of oxygen, carbon dioxide, glucose, and other indicators essential for survival. The value of a Vital ranges from 0% to 100% of its possible physiological range.
For instance: “Oxygen Vital – 82%” is intuitively clear even to a non-specialist.
Each Vital has threshold values; crossing these thresholds to any significant degree threatens life and necessitates restoration.
Vitals maintained within norm define a living entity, whereas fatal deviation from norm characterizes a non-living state. A Vital reflects the degree to which a parameter deviates from its adaptive norm, which itself depends on context (activity level, age, health status). The ensemble of Vitals determines the system’s viability: critical decline in any one may trigger transition to a non-living state.
A drop in oxygen from 95% to 85% may be tolerable, but from 60% to 50%—catastrophic. It may be useful to introduce a nonlinear scale or color-coded zones (green, yellow, red). 100% represents the optimal physiological value for a given state (rest, exertion, sleep, etc.).
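As an illustration of the normalized scale and the color-coded zones suggested above, the following minimal sketch models a Vital in Python; the class, thresholds, and zone boundaries are hypothetical choices for demonstration, not prescriptions of the model:

```python
# Minimal sketch of a Vital with color-coded zones. The thresholds are
# illustrative assumptions; the model only requires a normalized 0-100%
# scale with context-dependent norms.

class Vital:
    def __init__(self, name, value, green=(90, 100), yellow=(70, 90)):
        self.name = name      # e.g., "Oxygen"
        self.value = value    # 0..100, % of the physiological range
        self.green = green    # tolerable zone
        self.yellow = yellow  # deviation requiring restoration

    def zone(self):
        if self.green[0] <= self.value <= self.green[1]:
            return "green"
        if self.yellow[0] <= self.value <= self.yellow[1]:
            return "yellow"
        return "red"          # life-threatening deviation

print(Vital("Oxygen", 82).zone())  # -> "yellow"
```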
Not all Vitals directly regulate the organism’s life-support systems. For example, the reproductive drive (gonadotropic behavior) ensures species survival rather than individual survival. Curiosity enhances adaptivity by discovering new opportunities, thus indirectly supporting life maintenance. Evolution may expand the Vital system if it confers advantage to the organism or species. Thus, parameters such as Altruism or Dissatisfaction with the status quo may emerge, functioning identically to direct life-support Vitals by fulfilling their own adaptive roles within the adaptive system.
Homeostat – the system responsible for maintaining Vitals within norm. Unlike the term homeostasis, introduced by W. Cannon in 1932 (defined as the self-regulation enabling an open system to preserve internal constancy), Homeostat denotes the actual regulatory mechanism—not a vague “capacity”—that keeps life-critical parameters within context-dependent norms (e.g., varying by activity level or physiological state).
For example, a glucose level of 4 mmol/L may correspond to 100% at rest but only 60% during intense exertion.
The suffix -stat (from Greek statikos – “holding, stabilizing”) is used in terms like thermostat or hydrostat.
The term Homeostat was originally introduced by William Ross Ashby in 1948 to describe a physical device demonstrating principles of self-regulation and adaptation in cybernetic systems. Ashby called it a “homeostat” because it imitated the ability of living systems to maintain stability.
Life – the functioning of a Homeostat system. Cessation of this functioning equals death. Absence of a Homeostat indicates a non-living object.
For a Homeostat to function fully in maintaining Vitals, it requires a minimal set of behavioral styles suited to restoring specific life parameters: feeding, exploratory, defensive, and reproductive behaviors. These styles may activate singly or in combination (e.g., exploratory + feeding).
Thus, Life is defined as that which possesses a system for maintaining parameters of existential stability through at least the following adaptive behavioral styles: exploratory, feeding, defensive, aggressive, and replicative.
“Feeding behavior” does not necessarily mean glucose replenishment. For some entities, the feeding style regulates glucose; for others—kerosene; for others—battery charge levels.
Plants are also living organisms because they possess a Homeostat, though their regulation is limited to physiological and biochemical processes. For instance, they regulate water and mineral uptake, open/close stomata to control gas exchange and transpiration, and synthesize protective compounds in response to stress. Thus, context-specific responses exist, and plants fit the proposed definition of life—albeit representing a fundamentally different adaptive trajectory.
Homeostatic State Differentiator or DiffSigner (from differentiator + significance + agentive suffix) – a mechanism that determines the magnitude of change in the organism’s state-significance following an action, i.e., the effectiveness of the action’s consequences.
The term “differentiator” here derives from the root meaning “difference” or “distinction.” State differentiation is the organism’s ability to discern changes in its internal condition. For example, the organism “notices” nausea after eating certain food and associates this state with the specific action (consuming that food).
This mechanism of adaptation and learning through experience is fundamental to survival and evolution in novel environments. It enables rapid adaptation—avoiding dangers and seeking beneficial conditions.
It is so essential, efficient, and universal that state-difference values are used both at the unconditioned-reflex level and at the psychic level for all ego-centric evaluations (i.e., assessments relative to changes in one’s own state)—except for evaluations based on arbitrarily chosen goals (e.g., self-sacrifice).
The organism continuously compares changes in its state. After any action (eating, physical activity, environmental interaction), every change is interpreted as a potential consequence of that action.
This ability to distinguish and classify action outcomes makes the mechanism a powerful tool for adaptation.
Evolution has optimized the temporal window within which the organism links an action to its consequences, enabling efficient causal binding.
If a person falls ill hours after a meal, the body may associate the illness with that food. But if too much time passes (e.g., a day), the link weakens.
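A sketch of this temporal window, assuming (purely for illustration) an exponential decay with a six-hour constant:

```python
# Illustrative sketch of the causal binding window: the weight of an
# action -> consequence link fades with elapsed time. The exponential
# form and the 6-hour constant are assumptions, not measured values.
import math

def binding_weight(hours_elapsed, tau_hours=6.0):
    """Strength of the link between an action and a later state change."""
    return math.exp(-hours_elapsed / tau_hours)

print(round(binding_weight(3), 2))   # 0.61: a few hours, strong link
print(round(binding_weight(24), 2))  # 0.02: a day later, the link fades
```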
The mechanism also helps differentiate deviations from the homeostatic baseline: the organism “differentiates” its current state relative to the norm and seeks causes for these deviations.
The mechanism enables the organism to distinguish which actions or stimuli caused state changes—critical for learning and adaptation.
For instance, if a person develops a headache after drinking alcohol, they may “differentiate” this state as resulting from that specific action.
The organism not only detects changes but also classifies them as positive, negative, or neutral—guiding decisions about whether to repeat or avoid the action.
Pleasure from sweet food is classified as positive; pain from a burn—as negative.
Sharp changes in organismic state act as actual stimuli, triggering recruitment of the priority attention channel.
Significance – a quantitative measure of the adaptive value of any image (parameter, Vital, action, event, thought, or any image type), reflecting its contribution to homeostatic regulation.
Positivity indicates the degree of success in homeostatic regulation associated with that image; negativity indicates the degree of state deterioration.
Just as a school grade from 1 to 5 suffices to reflect performance, Significance requires no high-precision scale—a range from –10 to +10 is fully adequate. The scale is nonlinear: small values are most informative for response selection, while values near the maximum become increasingly compressed, with +10 representing an asymptotic limit of value growth.
This is an extremely simple yet deeply functional metric that unifies evaluation of everything—from physiological parameters to abstract images—within a single priority scale governed by the Egostat.
Significance scale: –10 to +10. This unifies all experience—physiology, emotions, thoughts, actions—into one evaluative framework.
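One possible realization of this compressive scale, sketched here with a hyperbolic tangent (the specific function and gain are assumptions; the model requires only the asymptotic, nonlinear shape):

```python
# Sketch of the nonlinear Significance scale: high resolution near zero,
# saturation toward the +/-10 asymptote. tanh and the gain are
# illustrative choices.
import math

def significance(raw_value, gain=0.25):
    """Map an unbounded raw evaluation onto the -10..+10 scale."""
    return 10.0 * math.tanh(gain * raw_value)

for raw in (1, 4, 10, 100):
    print(raw, round(significance(raw), 1))
# small inputs stay well resolved; large ones crowd toward the +10 limit
```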
Image – a functional structure designed to recognize a unique combination of input states. Each Image has its own uniqueness, expressible as a numeric identifier (ID), enabling system-wide addressing. In biological organisms, the analog is a synaptic ID.
Images in the Homeostat system come in various types: perceptual images (all sensory modalities), homeostatic state images (Vitals), action images, mental images (abstractions), etc.
Significance is itself a type of Image. Its ID specifies which aspect of the Homeostat system it evaluates.
This eliminates the dualism of “perception–action–value”: all are Images, differing only in input types and systemic functions. This enables hierarchical and recursive modeling—Images can embed other Images (e.g., the Apple Image = shape + color + smell + taste + Significance). It also makes the system addressable and controllable: via ID, any Image can be tracked, modified, amplified, or suppressed.
Thus, Significance is not “glued” to an Image but is a separate Image linked to it via ID, which keeps every evaluation independently addressable and modifiable.
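The ID-linkage can be made concrete with a short sketch; the dataclasses and field names below are hypothetical, while the composite Apple example and the ID principle come from the text:

```python
# Sketch of Images as addressable structures: Significance is itself an
# Image that refers, via ID, to the Image it evaluates, rather than a
# field glued onto it.
from dataclasses import dataclass, field

@dataclass
class Image:
    id: int
    kind: str                                   # "perceptual", "action", ...
    parts: list = field(default_factory=list)   # IDs of embedded Images

@dataclass
class SignificanceImage(Image):
    target_id: int = 0     # which Image this one evaluates
    value: float = 0.0     # position on the -10..+10 scale

apple = Image(id=1, kind="perceptual", parts=[2, 3, 4])  # shape, color, smell
tasty = SignificanceImage(id=9, kind="significance", target_id=1, value=4.0)
print(tasty.target_id == apple.id)  # True: linked by ID, not embedded
```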
Image Hierarchy Tree or Dendrarch (from Greek dendron = “tree,” plus -arch suffix denoting hierarchy, governance, or structure—as in hierarch, patriarch, anarch) – a tree-like structure in which Images are organized by hierarchical complexity: from simplest primitives to complex integrative Images, with contexts formed at each level.
Example: Bad → Fear → Night → Shadow → Humanoid figure → Grandmother Image.
Tree structures are natural and efficient for retrieval and hierarchical representation: earlier nodes function as categories, later nodes as category members. Moreover, with a fixed number of hierarchy levels (i.e., all branches ultimately have the same node count), the tree enables clear novelty detection when a branch is incompletely recognized—offering significant advantages over other hierarchical organization methods.
Starting from unimodal perceptual primitives, each new level integrates increasingly complex feature combinations, with terminal nodes unifying all sensory modalities into a single final Image. The number of such levels can be evolutionarily optimized—six may suffice. More levels allow more intermediate categories but slower image complexity growth, creating transitional states.
This applies not only to perceptual Images but also, for example, to Historical Memory structure and other hierarchical systems.
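The novelty-detection advantage of a fixed-depth tree can be sketched with the Grandmother branch from the example above; the data structures are illustrative assumptions:

```python
# Sketch of recognition along one Dendrarch branch: matching walks from
# primitive to terminal Image, and a branch that breaks off early signals
# Novelty at that level.

BRANCH = ["bad", "fear", "night", "shadow", "humanoid figure", "grandmother"]

def recognize(active_features, branch=BRANCH):
    """Return (deepest matched Image, novelty flag)."""
    matched = None
    for node in branch:              # walk from root primitive to leaf
        if node not in active_features:
            return matched, True     # incomplete branch: Novelty detected
        matched = node
    return matched, False            # full match: nothing new here

print(recognize({"bad", "fear", "night", "shadow"}))
# -> ('shadow', True): a partially recognized branch flags Novelty
```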
Behavioral Style or Homeocontext – a basic behavioral mode (feeding, reproductive, exploratory, defensive, etc.) that constrains possible reactions and within which new, ontogenetically acquired responses develop to achieve the homeostatic goals of that style. Basic styles are activated by an innate mechanism that optimally selects a group of styles based on current Vital states. Thus, basic behavioral styles serve as primary contexts determining reactions (i.e., base contexts).
Emotions – abstract reflections of behavioral styles. Emotions are Images representing combinations of active basic behavioral styles, establishing a motivational context for homeostatic goals and thereby forming the broadest objective of awareness: to generate a novel reaction alternative to habitual responses under current conditions, incorporating the novelty component.
Many theoretical models attribute to emotions properties beyond basic contexts—endowing them with complex “human” qualities. However, careful comparison of behavioral data, starting from behavioral styles, shows that emotions are simply Images corresponding to active behavioral style combinations—nothing more. More complex regulation occurs within their context.
Individual Adaptive System or Egostat – a homeostatic regulatory system maintaining a set of life-critical parameters within norm to prevent death. This necessity arises from natural selection: organisms with Homeostat flaws are eliminated from competition. Such flaws may include fatal actions.
Here, “Ego” does not denote the philosophical “Self,” but the functional center of survival priorities—ego-centricity.
The imperative to maintain Vitals in norm leads to the concept of ego-centric Significance: everything relevant to the individual’s adaptive functionality is evaluated through this lens. This is the core of motivation, value, attention, behavior—and even consciousness.
The term Egostat explicitly reflects the ego-centric nature of the system: everything important to “me” is determined by it. This emphasizes that value (Significance) originates in survival—not in an abstract “soul” or “reason.” It is a Homeostat that generates ego-centric Significance for all that supports survival.
Novelty – a characteristic of an Image reflecting the degree to which it has not yet participated in the Egostat’s current psychic-level adaptive regulation. At the reflex level, there is no Novelty, because all reflexes are triggered only within specific contextual conditions and do not require identification of new stimuli within known combinations (including ancient reflexes to loud, bright, or foul stimuli that may shift behavioral style).
Detection of significant Novelty is necessary to attract attention, as such Novelty renders conditions uncertain for existing reflex execution and may signal danger (most often) or opportunity.
A habitual reflex executed in novel conditions may yield unexpected consequences; hence, a vast portion of higher animal brains is dedicated to processing significant Novelty. Detection of negative Significance takes priority over positive, an evolutionary imperative: avoiding death is more urgent than gaining benefit. This priority is reflected in the brain’s disproportionately larger neural substrates for negative valuation compared with its reward centers.
Novelty lacking sufficient Significance, or with undetermined Significance, does not attract attention (it is simply unnoticed)—just as Significance with zero Novelty (i.e., well-learned conditions requiring no reinterpretation) goes unattended. This aligns with the formula for attracting conscious attention:
Image Actuality = Novelty × Significance
Note: Novelty ≠ Unexpectedness. Unexpectedness can be reflexive (flash → blink), whereas Novelty applies only to what requires conscious reinterpretation.
Novelty is a “blank spot” on the adaptive world map—a signal that the model must be updated.
Actual Stimulus or Orientant (from “orienting” + Latin -ans, -antis = “acting, exhibiting a trait”) – the current Image (sensory or mental) in the focus of the priority attention channel at a given moment, i.e., the Image with the highest Novelty × Significance product. The priority attention channel for interpreting novelty is singular: at any moment, it can process only one Image—the most actual among all active ones.
The Orientant triggers the orienting reflex—an innate program for reorienting attention and sensory resources toward a source of uncertainty or threat.
Images monitored for priority attention include both perceptual and mental Images from the interpretation process. Thus, the priority attention channel can switch from external to a more actual mental Image. Moreover, when deeply engaged in important interpretation, the channel raises its switching threshold to prevent distraction—but maintains a “sentinel mode” that still allows interruption by a highly actual new Image.
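The selection rule and the raised switching threshold of the “sentinel mode” can be sketched as follows (names, numbers, and the form of the threshold mechanism are assumptions for illustration):

```python
# Sketch of Orientant selection: the single priority channel takes the
# Image with the highest Novelty x Significance product, and a raised
# threshold during deep interpretation models the "sentinel mode".

def actuality(image):
    name, novelty, significance = image
    return novelty * abs(significance)

def select_orientant(images, current_actuality=0.0):
    """Return the most actual Image if it breaks through the threshold."""
    best = max(images, key=actuality)
    if actuality(best) > current_actuality:
        return best[0]
    return None  # sentinel mode holds: no switch of attention

images = [("shadow", 0.9, -6), ("hum", 0.1, -1), ("new idea", 0.5, 8)]
print(select_orientant(images))                         # 'shadow' (5.4)
print(select_orientant(images, current_actuality=6.0))  # None: no switch
```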
Priority Attention Channel or Aten (from attention) – the link between awareness processes and sensory perception or mental activity. Awareness mechanisms have evolved into a vast hierarchical system for increasingly deep and efficient processing of the actual stimulus.
According to A. Ivanitsky’s model, sustained stimuli in hippocampal feedback loops connect to the frontal lobes, with the orienting reflex selecting the most actual Image from all candidates. However, the frontal cortex also contains multiple reverberating loops—only one serves as the main awareness iteration, while others are interrupted or completed interpretation cycles (the unconscious). Within one such loop, ongoing processing may reveal extreme Significance—even if the stimulus was already in the priority channel—making this mental actuality itself an Orientant that captures attention. This is known as insight or illumination: a previously unconscious cycle becomes the main one and enters awareness.
Reflex – an unconscious program of response (external or mental) within specific contextual conditions. This aligns with I. Sechenov’s definition: “A standard responsive reaction of the organism to external influence, mediated by the nervous system,” but clarifies the role of context and emphasizes that reflexes are not part of awareness, though the mechanisms of awareness themselves are innate reflexes.
Sechenov wrote: “All acts of conscious and unconscious life, by their mode of origin, are reflexes.” However, some adaptive mechanisms respond not only to external (or internal sensory) inputs but are activated by a specific structure of informedness (B. Baars fornit.ru/70033, G. Tononi fornit.ru/70040, D. Dubrovsky fornit.ru/70862)—a context that guides the direction of awareness, just as external context guides reflex selection. Such mechanisms are not reflexes but serve to form new reflexes for novel contextual components.
A reflex is triggered by a unique combination of perceptual features (recognized by a context Image); it may be innate or formed during ontogenesis.
All Sechenov-style acts (stimulus–response in context) are reflexes—but only after they are formed. The formation process itself is not a reflex but a meta-process. In innate reflexes, this is genetic predisposition realized during ontogenetic maturation. In novel ontogenetic reflexes, it involves forming new connections (e.g., with perceptual primitives, repeated stimuli, or cerebellar circuits) or constructing responses via awareness processes.
Innate Reflex or Genoreflex – genetically predetermined connections that become functional after structural maturation during a specific developmental window (critical period for functional specialization). This term replaces the outdated “unconditioned reflex,” since all reflexes operate within specific conditions.
A Genoreflex may represent a complex action sequence achieving a homeostatic goal (instinct). Such chains are triggered depending on contextual specifics, so instinctive behavior consists of context-branching action chains. The term “instinct” is thus redundant and ambiguous—it doesn’t matter whether a chain contains 1 or 100 motor acts; all chains are always triggered by unique contextual combinations. Therefore, the term Genoreflex suffices. Behavior “branches” not because it’s “instinct,” but because each subsequent act is itself a Genoreflex triggered by a new perceptual feature combination.
Synonym Reflex or CloneReflex – a clone of an existing reflex that begins responding to a new contextual component, thereby expanding reactivity to new trigger stimuli. This replaces the outdated “conditioned reflex,” since the word “conditioned” is superfluous: all reflexes operate within specific contextual conditions.
A CloneReflex copies another reflex’s response but has its own structure (not just new connections), enabling extinction (to prevent random associations with transient stimuli; unused links weaken, keeping responses aligned with current relevance).
CloneReflexes require no reinforcement during formation—only a few repetitions of the new stimulus slightly preceding the old one (though intervals can be long, e.g., 24 hours; the link strengthens with repetition until fully established). This may seem counterintuitive: in I.P. Pavlov’s classic textbook experiment, the conditioned reflex formed only when the bell was followed by food. In reality, without food, no reflexive response occurs to the second stimulus, so there’s nothing to clone onto the new one. Consider another example: a conditioned reflex forms if touching a water bowl delivers an electric shock. The dog exhibits an unconditioned withdrawal reflex to the shock; after several pairings, the bowl image alone triggers withdrawal. This is called “negative reinforcement,” though withdrawal is simply a reflexive response to shock.
Reinforcement (food, shock) is not the cause of linkage but merely a means to elicit a reflexive response that can then be cloned.
A CloneReflex forms when a new (neutral) stimulus repeatedly precedes an old stimulus that already evokes a reflexive response. The brain copies (clones) the response structure from the old to the new stimulus—not because it’s “rewarding” or “punishing,” but because the new stimulus becomes a predictor of the old, and the response is shifted forward in time to enhance adaptivity.
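The cloning rule admits a compact sketch, reusing the water-bowl example from above; the repetition count and data structures are assumptions:

```python
# Sketch of CloneReflex formation: a neutral stimulus that repeatedly
# precedes a trigger stimulus receives a clone of the trigger's response.

class ReflexTable:
    def __init__(self, clone_after=3):
        self.responses = {}    # stimulus -> response program
        self.pairings = {}     # (new, old) -> observed precedences
        self.clone_after = clone_after

    def observe_pairing(self, new_stimulus, old_stimulus):
        """new_stimulus occurred slightly before old_stimulus."""
        if old_stimulus not in self.responses:
            return  # no reflexive response exists, nothing to clone
        key = (new_stimulus, old_stimulus)
        self.pairings[key] = self.pairings.get(key, 0) + 1
        if self.pairings[key] >= self.clone_after:
            # The predictor now triggers the response ahead of time.
            self.responses[new_stimulus] = self.responses[old_stimulus]

reflexes = ReflexTable()
reflexes.responses["shock"] = "withdraw"
for _ in range(3):
    reflexes.observe_pairing("water bowl", "shock")
print(reflexes.responses.get("water bowl"))  # -> "withdraw"
```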
Cerebellar Reflex or OptReflex (from “optimization” + “reflex”) – forms to support the fine-tuning of a newly developing ontogenetic reflex, optimizing force, coordination, and integration with concurrent actions. Without such reflexes, forming a new reflex would require numerous supplementary reflexes, greatly prolonging optimization (as seen in cerebellar pathologies). The cerebellum receives information about the reaction’s goal and forms reflexes ensuring its efficient achievement during new reflex execution.
These are second-order reflexes that “tune” primary reflexes to current conditions, ensuring goals are met efficiently, without excess energy expenditure or conflict with other actions.
Automatism or NoReflex (from Greek nous = “mind, intellect”) – a reflex formed by the priority attention system during awareness for a specific combination of conditions (organism state, sensory input, and actual trigger stimulus). A new automatism may reuse an existing response program or modify part of its sequence to achieve the goal under new conditions. The original action chain may be an innate reaction.
During awareness of an actual stimulus, the reaction chain can be paused at any link so that the habitual continuation can either proceed unchanged or be replaced with a modified response.
Example: musical instrument skill—initially each movement is consciously controlled; later, a NoReflex forms, enabling “automatic” playing while preserving expressiveness aligned with emotion and audience.
Orienting Reflex – an innate mechanism selecting the most actual stimulus from all active perceptual (and mental) Images and reorienting attention and sensory resources toward it.
If a mental Image is most actual, it does not create a new hippocampal reverberation but instead organizes the main interpretation cycle in the frontal cortex, relegating others to the background (the insight phenomenon). Thus, the original perceptual stimulus remains held in the hippocampus (preserving working memory), while awareness may shift through several more actual mental Images, each updating the informedness picture—which serves as context for the next interpretation step.
I.P. Pavlov described an animal’s reaction to a novel, sudden stimulus as the “What is it?” reflex or Orienting Reflex.
Global Informedness Picture or Infocart / Infocontext – a working memory structure of the current interpretation state, containing slots for preserving current information elements.
Informedness activity creates the context for selecting the next interpretation step, which in turn updates informedness for the subsequent step.
Slot composition is evolutionarily optimized and may differ not only across species but also among individuals, forming the potential for varied interpretation capabilities.
Preceding theories: B. Baars, G. Tononi, D. Dubrovsky.
Information or Infoabstract – Images of any type linked to their ego-centric Significance within the current awareness process.
Here, “information” is not used in its conventional technical senses; it refers to Images linked to ego-centric Significance, informing the subject about some aspect of current awareness.
Data alone constitute only conventional knowledge (personal informedness) and are thus accessible only to those who understand the conventional symbols of that informedness. A cat seeing a book gains no information: the text carries no subjective Significance for it. All elements (significant Images) of the global info-picture together constitute holistic situational understanding.
For an Image’s Significance to inform the subject, it must be brought into conscious attention. Outside the single actual-stimulus processing channel, there is no informedness—equivalent to awareness, since informedness is possible only through stimulus awareness.
The Significances of individual Images contributing to the global info-picture serve as elementary components of informedness (quanta of consciousness). Together, they create the conscious context of the current situation and interpretation stage—i.e., the subjective experience that evolves with each interpretation step.
Interpretation Cycle or Iteron (iteration of awareness) – a sequence of steps in which information is processed and interpreted. These are the actual-stimulus processing mechanisms of the awareness system, involving information retention in memory and analysis for decision-making.
Awareness Function Dispatcher or Dispatcheron – an innate system managing awareness cycles, using global informedness as context to select the direction of the next interpretation step.
Informational Functions or Infofunctions – innate structures specialized in retrieving information in response to mental queries from the Awareness Function Dispatcher, for example by recalling records from Historical Memory or by comparing and generalizing Semantory data.
Each interpretation step invokes an Infofunction, whose output updates the Infocontext for the next step.
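The step-by-step loop can be sketched as follows; the three toy Infofunctions and the dispatcher’s selection rule are stand-ins invented for illustration, while the loop structure (context selects function, output updates context) follows the text:

```python
# Sketch of one interpretation cycle (Iteron): the Dispatcheron reads the
# Infocontext, picks an Infofunction, and its output updates the
# Infocontext for the next step.

def recall(ctx):   return {"memory": f"past cases of {ctx['stimulus']}"}
def compare(ctx):  return {"match": "similar episode found"}
def decide(ctx):   return {"action": "novel response", "done": True}

INFOFUNCTIONS = [recall, compare, decide]

def dispatcheron(infocontext):
    """Select the next Infofunction from the current informedness."""
    return INFOFUNCTIONS[min(infocontext["step"], len(INFOFUNCTIONS) - 1)]

def iteron(stimulus, max_steps=5):
    ctx = {"stimulus": stimulus, "step": 0}
    while not ctx.get("done") and ctx["step"] < max_steps:
        infofunction = dispatcheron(ctx)
        ctx.update(infofunction(ctx))   # output becomes the new context
        ctx["step"] += 1
    return ctx

print(iteron("unfamiliar shadow"))
```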
Abstraction – a structure representing a unique mental Image (by ID), linked to Significance within a specific context. It is a quantum of consciousness. Abstractions are universal: the same abstraction (e.g., One or Good) evokes identical understanding across all minds.
Unlike rigidly recognized Images, abstractions permit arbitrary operations.
Goal – an Abstraction representing a desired action outcome, i.e., an Image with assigned Significance for the result of actions to be achieved.
Consciousness – activity of the voluntary attention channel, comprising interpretation cycles: the main (experienced, updating the “movie frame”) and background cycles (unconscious).
Interpretation – the main iteration of the awareness cycle: the process of resolving the alternativity to habitual responses under novel conditions, thereby updating the Infocontext.
Thinking – a deep, sequential series of interpretation steps (iterations) refining informedness to solve a goal-directed problem or engage in passive fantasy. If no solution is found at evolutionarily earlier interpretation levels, the process deepens to a more flexible level, accompanied by specific mental experiences of problem-solving stages.
Unconscious – the ensemble of active but non-main processing cycles in the frontal cortex. Not “repressed,” but “lacking access to Aten.” The source of insights.
Awareness – the strength, clarity, and intensity of current informedness experience. Not “attentiveness,” but a measure of the depth and efficiency of the awareness process.
Voluntariness – anything that is alternative to habitual (reflexive) responses under given conditions. The foundation of psyche as a “system of voluntary-level adaptivity.”
Meaning – the conscious evaluation of an Image’s Significance, enabling definite orientation toward it (avoidance or pursuit).
Understanding Model or Semantory – the system of an Image’s Significances across all evaluated contexts, linked to memory of specific rules stored as experiential records in Historical Memory. Enables instant Image interpretation upon activation. Infofunctions can compare and generalize Semantory data to evaluate properties and meaning of attentional objects.
Historical Memory – the combined semantic and episodic memory, forming sequential “frames” of interpretation moments for later retrieval by Infofunctions.
Dominanta of an Unsolved Problem or Gestalt – memory of a problem state that could not be resolved during an awareness cycle and, due to its Significance, is deferred for future resolution. During subsequent awareness cycles, the Dominanta’s stored information is compared with current data, potentially enabling analogical solution accompanied by insight. In psychology, this phenomenon is described as a Gestalt.
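A minimal sketch of such deferral and later matching (the overlap test and threshold are invented for illustration):

```python
# Sketch of a Dominanta store: unsolved problems above a Significance
# threshold are deferred; later awareness cycles compare them with new
# data, and a sufficient overlap models the analogical insight.

dominantas = []  # deferred problems: (description, features, significance)

def defer_problem(description, features, significance, threshold=5.0):
    if abs(significance) >= threshold:
        dominantas.append((description, set(features), significance))

def check_against(current_features, min_overlap=2):
    """Run on later cycles; returns problems the new data may unlock."""
    return [d for d in dominantas
            if len(d[1] & set(current_features)) >= min_overlap]

defer_problem("open the latch", {"latch", "lever", "angle"}, 7.0)
print(check_against({"lever", "angle", "stone"}))  # analogy found: insight
```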
All living beings share the same fundamental principles of adaptive organization. The most essential of these are the characteristics that maintain the organism in a recognizably alive, integrated state—Vitals. It is not the loss of limbs or certain organs that renders an organism dead, but rather the fatal deviation of Vitals from their norm.
Life is not anatomy—it is the functioning of a system that maintains Vital parameters within an adaptive norm. Death occurs not from the loss of body parts, but from a catastrophic failure in the regulation of these parameters.
A Vital is a normalized quantitative measure of a life-critical physiological variable—a numerical characteristic of a specific type of condition critically important for sustaining life. Examples include concentrations of oxygen, carbon dioxide, glucose, and other indicators essential for survival. The state of a Vital is expressed on a scale from 0% to 100% of its possible physiological range.
A person loses an arm or leg in an accident. This is a severe trauma, yet they remain alive, conscious, capable of adaptation, and able to lead a full life.
The same person eats peanuts to which they are allergic. Anaphylactic shock develops: blood pressure (the “Blood Pressure Norm” Vital) plummets, airways constrict (the “Blood Oxygen Saturation” Vital), and heart rhythm is disrupted (the “Heart Rate Norm” Vital). Despite complete anatomical integrity, the system collapses—it cannot compensate for the rapid deviation of Vitals beyond the viable norm. Without urgent medical intervention (external support to restore Vitals), death ensues.
A lizard, escaping a predator, voluntarily sheds its tail. The tail continues twitching, distracting the enemy, while the lizard flees. It has lost a significant body part but remains alive and active.
The same lizard is caught in an unexpected frost. Its body cannot warm itself: body temperature (a key Vital for a cold-blooded animal) drops below a critical threshold. Metabolic processes halt, the heart stops. Life ends. Yet some animals in this state can be revived simply by warming them back to the functional norm of their Vitals.
A powerful hurricane breaks most branches and foliage off a tree. The tree appears maimed, but its trunk and root system remain alive. It gradually regrows leaves and continues living.
The same tree stands intact, but prolonged drought causes cellular water levels (the “Cellular Hydration” Vital) to fall and turgor pressure to collapse. Photosynthesis and nutrient transport cease. The tree dries out and dies.
These examples clearly demonstrate that “alive” is not a sum of organs, but a functioning regulatory system for Vital parameters (the Egostat). As long as this system operates and keeps Vitals within norm, the organism is alive—even if severely damaged. The moment regulatory control fails and Vitals exit the viable range, death occurs—even if the body appears perfectly intact.
Biology lacks a rigorous definition of “life.” However, a precise definition emerges directly from the foundational concept of the Homeostat—a system that actively maintains internal stability. The essence of life lies not merely in the presence of biological processes, but in the active functioning of a specialized regulatory system. Life depends directly on maintaining the Homeostat’s Vitals within norm. The Homeostat is not an abstract “capacity” (as in the classical notion of homeostasis), but a concrete system of mechanisms that dynamically regulates parameters depending on context (rest, exertion, etc.).
Cessation of the Homeostat’s function is directly associated with death. Absence of such a system indicates a non-living object.
For full functionality, the Homeostat requires adaptive behavioral styles (exploratory, feeding, defensive, replicative), which are activated to restore disrupted Vitals.
Definition of Life:
Life is the functioning of a Homeostat system—that is, the active operation of a mechanism that maintains life-critical parameters (Vitals) within an adaptive norm through basic behavioral styles (exploratory, feeding, defensive, aggressive, replicative). Cessation of this functioning equals death. Absence of a Homeostat indicates a non-living object.
This definition is deliberately inclusive of plants. Although their behavior is limited, they possess their own Homeostat system, regulating water uptake, gas exchange, etc., enabling environmental adaptation.
Thus, life is defined not by chemical composition or reproduction, but by a dynamic process of self-regulation aimed at preserving internal stability critical for existence.
This definition of Life serves as the foundational postulate upon which the entire conceptual architecture of the individual adaptive system is built. Life is the functioning of a Homeostat. This definition is primary and exhaustive within this framework. Death is the cessation of this functioning. Non-life is the absence of such a system.
This is an operational definition, enabling both theoretical modeling and experimental implementation.
A Single Cell Is a Living Organism
From the perspective of the Egostat concept, a single cell is a fully autonomous living being, possessing all necessary and sufficient attributes that define life.
The cell actively maintains numerous internal parameters within strict boundaries. Deviation of any parameter beyond its norm leads to cell death—that is, to the “death” of its Egostat. Key Vitals include intracellular pH, ion balance, ATP level, osmotic pressure, and DNA integrity.
These parameters are not passive characteristics but dynamically regulated Vitals. The cell continuously expends energy (ATP) to keep them in norm—this is the work of its Egostat.
The cell is not passive. When Vitals are disrupted, it activates specific “behavioral styles”—adaptive action programs to restore them. These are full analogs of the Homeocontexts seen in higher organisms.
The cell is also capable of “learning” and adapting based on experience, for example by altering gene expression in response to stress.
Thus, a cell is not merely a “building block” of life, but an autonomous, self-sufficient individual adaptive system.
Therefore, by the strict definition provided in the Egostat concept, a single cell is a living being. Its Egostat functions exactly like that of a complex multicellular organism—with the only difference being the far lower scale and complexity of its world models. But the essence—ego-centric regulation of life parameters—is absolutely identical.
For a plant, as for an animal, life is defined by the functioning of the Egostat—not by appearance or anatomical completeness. A fatal failure in Vital maintenance (water, minerals, energy) is death.
The Egostat is “a homeostatic regulatory system possessing a set of life-critical parameters that it maintains in a normal state to prevent death.” Thus, “Egostat” is not a separate entity, but simply another name, or a more detailed description, of the Homeostat system—whose functioning is Life. All other components of the adaptive system are parts or mechanisms of the Homeostat/Egostat.
When a cell becomes part of a multicellular organism, it does not cease to be a living being from the Egostat perspective—but its adaptive functionality becomes subordinated to a hierarchy. Its own Egostat continues to function, but its primary task is now not only to maintain its own Vitals but also to ensure the stability of the higher-level system—the whole organism. This is not cancellation, but evolutionary complication of the adaptive hierarchy.
Even within an organism, each cell maintains its internal Vitals: pH, ion balance, ATP level, osmotic pressure, temperature (within the organism’s overall temperature), and DNA integrity.
Within the organism, the cell continues to employ its behavioral styles.
The cell possesses a DiffSigner: it adapts to stress (e.g., hypoxia) by altering gene expression.
Conclusion: at this level, the cell is a fully autonomous living being, as its Egostat actively functions.
However, within a multicellular organism, the cell loses some of its “freedom”: its behavior is regulated by signals from the higher-level system, the organism.
Conclusion: the cell becomes a “vassal” in the Egostat hierarchy. Its own Egostat continues to function, but its top priority is now maintaining the Vitals of the higher-level system (the organism), even at the cost of its own life.
This is not an exception but a fundamental principle of complex adaptive systems: each hierarchical level is a full-fledged Egostat, but its adaptive function is directed toward maintaining the Vitals of the higher-level system.
When a cell joins an organism, it is not a loss of life but a transition to a new, more complex level of adaptivity: its Egostat continues to function, with its priorities re-subordinated to the stability of the whole.
Thus, symbiosis is not the cancellation of individuality, but its transformation within a hierarchical Egostat structure. The cell does not die as a living being—it “becomes part of something greater,” where its adaptive function acquires new, more complex meaning. This is the essence of adaptive system evolution—from a unicellular Egostat to a superorganism.
To regulate the organism, it is necessary to continuously monitor its current Vital state in order to, first, select the most appropriate behavioral styles in response to Vital deviations and, second, evaluate the consequences of executed actions.
In the first case, the system simply optimizes its response to Vital deviations to select the most appropriate behavioral styles (Homeocontexts), which serve as contextual frameworks for choosing reaction types. This mechanism is evolutionarily optimized. For instance, in states of thirst or, especially, oxygen deprivation, activating reproductive behavior would be maladaptive.
The second case characterizes highly developed adaptive systems that select novel responses in new conditions—requiring post-action evaluation of whether the homeostatic goal was achieved or whether the action worsened the state. This is a complex mechanism requiring optimization of the waiting period for state change. If a person falls ill hours after a meal, the body may link the illness to that food. But if too much time passes (e.g., a day), the link weakens. If a person gets a headache after drinking alcohol, they can confidently attribute it to that specific action.
By distinguishing the positivity or negativity of behavioral consequences—and their intensity—it becomes possible to store behavioral rules for future use.
Evolutionarily optimized priority hierarchy is the task of the innate, baseline level of Egostat regulation. The system continuously monitors Vitals via sensory channels (internal receptors—interoceptors). When a parameter deviates from norm, the corresponding Homeocontext—an evolutionarily fixed behavioral program for restoring that specific Vital—is activated.
The DiffSigner operates within an innate priority hierarchy. Disruption of the “Blood Oxygen Saturation” Vital (hypoxia) activates defensive/aggressive Homeocontexts (panic, fight for air) and suppresses all others, including replicative (sexual behavior) or even feeding. This is not a choice but a rigid, evolutionarily refined scheme: without oxygen, death occurs within minutes, not days as in starvation.
The term “Homeocontext” itself emphasizes that behavioral choice depends on context—the current state of all Vitals. For example, hunger (disrupted “Glucose Level” Vital) in a safe environment activates exploratory/feeding Homeocontexts. The same hunger in the presence of a predator activates a defensive Homeocontext—food seeking is postponed until the threat is eliminated.
At this level, innate mechanisms predominate.
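These two examples can be condensed into a sketch of innate Homeocontext selection; the priority table and names are illustrative assumptions (with “threat” standing in for the detected-predator context):

```python
# Sketch of innate Homeocontext selection: the most urgent disruption
# dictates the behavioral style, so hypoxia overrides everything and a
# predator postpones feeding.

PRIORITY = {"oxygen": 3, "threat": 2, "glucose": 1}  # higher = more urgent

STYLE = {"oxygen": "defensive/aggressive (fight for air)",
         "threat": "defensive",
         "glucose": "exploratory/feeding"}

def select_homeocontext(disruptions):
    if not disruptions:
        return "exploratory"
    worst = max(disruptions, key=PRIORITY.get)
    return STYLE[worst]

print(select_homeocontext({"glucose"}))            # hunger in safety
print(select_homeocontext({"glucose", "threat"}))  # hunger near a predator
```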
Learning through state differentiation provides consequence evaluation and adaptive model formation.
After any action (innate or novel), the DiffSigner analyzes changes in Vital states. If the action restores norm (e.g., eaten food raises glucose levels, quenching hunger), it is marked as positive and reinforced. If the action worsens the state (e.g., eaten food causes abdominal pain—disruption of the “GI Tract Integrity” Vital), it is marked as negative and suppressed.
The key function of this level of the DiffSigner is to produce a composite value representing the magnitude of state change, expressed in units most convenient for use by any component of the individual adaptive system. The most effective approach is a nonlinear approximation: small state changes yield high sensitivity, but as deviation increases, the response scale becomes less sensitive, asymptotically approaching a maximum limit.
Before a trial action, the organism’s current state is stored; after the action, state changes are monitored. This creates an explicit cause-effect sequence: reaction → consequences, which can be stored in memory as a behavioral rule.
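The cycle just described (snapshot, act, diff, compress, store a rule) admits a direct sketch; the data shapes, the tanh compression, and its constants are assumptions consistent with the asymptotic scale described above:

```python
# Sketch of the DiffSigner cycle: snapshot Vitals before a trial action,
# measure the change afterwards, compress it nonlinearly onto -10..+10,
# and store a reaction -> consequence rule.
import math

def diff_signer(before, after):
    """Composite, asymptotically bounded estimate of the state change."""
    raw = sum(after[v] - before[v] for v in before)  # summed Vital shifts, %
    return 10.0 * math.tanh(raw / 30.0)              # compressed scale

rules = []  # behavioral rules: (action, compressed consequence)

before = {"glucose": 55, "oxygen": 95}   # state stored before the action
after = {"glucose": 80, "oxygen": 95}    # state monitored afterwards

score = diff_signer(before, after)
rules.append(("eat berries", round(score, 1)))
print(rules)  # [('eat berries', 6.8)]: positive, worth repeating
```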
But the DiffSigner evaluates not only internal states. If behavior elicits a positive or negative response (e.g., another animal’s reaction or even an inanimate object’s behavior), this immediately triggers a corresponding affective response, interpreted as the consequence of the action. In this case, two rules form: a behavioral rule (which action elicits which response from the object) and a semantic rule (which properties the object exhibits in this context).
This enables future selection of actions to elicit desired responses from external objects and assigns semantic properties to those objects based on their behavior in specific contexts (semantic memory). Such external responses can also serve as behavioral examples, allowing the organism to mirror (imitate) the behavior of others—even inanimate objects—for its own benefit in unfamiliar situations.
By observing another entity (living or even inanimate—e.g., a stone rolling down a hill) perform an action and obtain a result, an individual can copy that action in a similar situation, expecting a similar outcome. This is a powerful mechanism of learning without direct experience. Others’ behavior becomes a data source for learning.
Evaluation of external reactions (semantic memory and mirroring) represents a qualitative leap in adaptation—using the external world as a source of feedback.
Thus, although the DiffSigner at the second level of response is linked to conscious processes, it remains a purely mechanical, innate mechanism whose primary task is to provide, on demand, the organism’s current state on an exponential scale of values.
Optimization of the waiting period for consequences occurs at the level of consequence interpretation, through accumulated experience. For example, liver cancer after 20 years of alcohol abuse falls outside the temporal window for causal linkage, but information comparison can still lead to that conclusion.
The DiffSigner is not a subjective “experience” but an objective, innate, mechanical measuring instrument. Its task is to quantitatively assess changes in Vital states after an action. High sensitivity to minor deviations is critical for early threat detection. For instance, a barely noticeable drop in blood sugar or slight fever must trigger a strong alarm signal to activate corrective mechanisms before the state becomes critical.
Reduced sensitivity during large deviations—when the composite Vital value has already fallen far below norm (e.g., severe pain, shock)—prevents system “overload” from excessive signals and allows resources to focus on survival. There is a maximum “weight” for negative or positive effects. This is logical: death or orgasm represent peak significance, and “more deadly than death” is impossible.
The DiffSigner outputs a normalized, exponentially compressed assessment of the organism’s state change. This is a “universal language” understood by all Egostat levels—from cellular to cognitive—to which they are all tuned.
The DiffSigner is a universal innate mechanism for evaluating consequences, operating via strict algorithms with a nonlinear sensitivity scale. It establishes causal links both within the organism and in the external world, forming behavioral rules and semantic memory. It is upon this foundation that the entire edifice of complex behavior, learning, and ultimately consciousness is built. This is a clear example of the “circuitry of life.”
The evolutionary development of the individual adaptive system is optimized for each species, ensuring survival within the specific conditions of its habitat. The simplest organisms rely solely on reflexes and can adapt to new conditions only through mutations and hereditary changes. Moreover, the organismic regions upon which subsequent adaptive levels are built are protected from mutations, because altering their internal structure would be fatal to all higher-level adaptations built upon them. This creates evolutionary dead ends: once an organism passes a certain stage of adaptive refinement, it loses the ability to alter earlier stages. This is why a dragonfly—or even a monkey—can never evolve into a human.
Natural selection is the process of tuning the Egostat architecture to specific conditions; for each species, a characteristic configuration of these adaptive mechanisms is formed.
In simple organisms (e.g., bacteria, protozoa, jellyfish), the Egostat is built almost exclusively on hardwired reactions. Their DiffSigner is primitive or absent—they do not learn within their lifetime. Their adaptation to new conditions is possible only through phylogenesis—mutations, selection, inheritance.
Conservation of Baseline Levels: “You Can’t Rebuild the Foundation”
“Organismic regions upon which subsequent adaptive levels are built are protected from mutations…”
This is a key point. According to the adaptive hierarchy concept, each new level is built upon the previous one. This is not merely a metaphor—it is a principle of structural and functional dependency.
Example: in multicellular organisms, the cellular Egostat (ion balance, pH, ATP maintenance) evolves first. Then comes the tissue level (cell coordination), followed by the systemic level (nervous, endocrine regulation), then the behavioral level (instincts, learning), and finally the cognitive level (thinking, consciousness).
If a DNA mutation disrupts a baseline mechanism (e.g., Na⁺/K⁺ pump function in cells), the entire “house” collapses—the organism dies at the embryonic stage. Therefore, genes encoding fundamental Egostat components (e.g., ion channels, core metabolic enzymes, neurotransmitter structures) are extremely conservative. They change very little during evolution, even between vastly different species.
Evolutionary changes occur primarily at the “periphery” of the system—sensory receptors, behavioral programs, and higher brain structures.
Evolutionary Dead Ends: Why a Dragonfly Will Never Become Human
“A dragonfly or even a monkey will never become human.”
This is a direct consequence of hierarchical dependency.
A dragonfly is an insect with a radically different Egostat architecture. Its nervous system is ganglion-based; it lacks any homolog of the cerebral cortex; its DiffSigner operates on entirely different principles. Its “foundation” (body plan, nervous system, metabolism) is incompatible with the architecture required for human-level consciousness.
A monkey is closer to humans, but still has insurmountable constraints. Its brain has a different minicolumn balance, lobe proportions, neuronal metabolic support, and innate social Homeocontexts. Human evolution from a common ancestor with monkeys took millions of years and required sequential restructuring of every Egostat level—from genes regulating brain development to social instincts.
Attempting to “reprogram” a dragonfly or chimpanzee into a human is not an upgrade—it is trying to build a skyscraper on a shed’s foundation. The architecture cannot support it. Evolution cannot “roll back” baseline levels and rebuild them from scratch—this would be fatal to the existing organism.
Conclusion: Evolution Is a One-Way Path with Foundation Conservation
Thus, humans are not the “highest stage” of evolution, but one of many possible, highly specific, and irreversible paths of Egostat development—based on a unique combination of conserved baseline mechanisms and novel high-level adaptations (language, abstract thought, culture).
A human cannot emerge from a dragonfly because evolution does not build from clay—it builds from what already exists—and the foundation cannot be changed without collapsing the entire structure. Humans acquired the potential for rapid self-improvement at some early stage—for example, by developing a Vital called “Dissatisfaction with the Status Quo.” If another species passed that stage long ago without this acquisition, and the stage then became mutation-protected, the required mutation can never arise there. Of course, it is not just one, but a multitude of mutations that defined humanity.
Living beings always base their adaptivity on ego-centric regulation (fornit.ru/70018).
Since the “Individual Adaptive System” (Egostat) is by definition a Homeostat system, and “Life” is defined as the functioning of this system, there cannot exist an individual adaptive system that is not a living being. This would be a logical contradiction: a system defined as maintaining life (homeostasis) cannot exist outside of life itself. It is precisely what makes an entity alive within this conceptual framework.
The existence of altruism (even unconscious manifestations in many animal species), self-sacrifice, and humanistic ethics does not contradict the ego-centric foundation—it is defined by it. Significance acquires the highest competitive value during model formation because the system (Egostat) learns to maintain its Vitals not only through direct physical action but also through complex world models in which significance extends beyond “self” to include “others,” whose state critically affects one’s own survival and well-being.
Example: Mother and Child—where the biological imperative is the highest significance.
For any species, the primary Vital is species survival, realized through the reproductive Homeocontext. For a mother, this is directly tied to the survival of her genetic material—her offspring.
The mother’s brain forms a model in which the child’s state acquires significance equal to or exceeding that of her own physical comfort or safety. The child’s pain or threat is perceived as an extreme threat to her own Vital (reproductive success).
She sacrifices herself to protect her offspring from predators or gives up food to feed it. This is not “altruism” as renunciation of self-interest, but the most efficient strategy for maintaining her primary life parameter—reproductive success. Her Egostat has learned that investing resources in offspring is the best way to preserve herself (genetically).
Example: Social Animals (wolves, primates)—“We” as an extension of “I.”
For social animals, individual survival directly depends on the health of the group or pack. A lone wolf or monkey expelled from the group is doomed.
Through state differentiation (DiffSigner), the Egostat learns that group well-being is critically important for maintaining its own Vitals (safety, food access, reproductive opportunities). The individual forms a model in which the “group” becomes part of its “extended Self.” Violating social norms (e.g., unprovoked aggression against kin) leads to expulsion—Egostat death.
The animal shows “selflessness” by defending kin, sharing food, or submitting to the leader. This is not a selfless act but an investment in system stability, on which its own life depends. Ethical norms within the group are adaptive rules formed by Egostats to optimize collective survival.
Example: Human Humanism—abstract significance as a survival tool in a complex world.
Humans are hyper-social beings. Their survival and well-being depend on the functioning of vast, complex social systems (family, community, state, global society).
The human Egostat, equipped with powerful thinking mechanisms, forms abstract models. It learns that societal stability, safety, and prosperity are fundamental conditions for maintaining its own Vitals at a high level. Concepts like “justice,” “empathy,” and “humanism” become tools for predicting and managing the social environment. Witnessing another’s suffering causes discomfort because the Egostat interprets it as a sign of system dysfunction—ultimately threatening the self.
People donate to charity, risk their lives to save strangers, or fight for others’ rights. This is not a rejection of egoism but its highest, most complex form. The individual acts in the interest of “humanity” or “the common good” because their Egostat has determined that, in the long term and at a global scale, this is the most reliable way to ensure their own safety, stability, and satisfaction. Humanism is an adaptive strategy that benefits the individual within a complex social ecosystem.
Conclusion: Altruism is not a rejection of ego-centrism, but its evolutionary development and complication. The Egostat, striving for maximum efficiency in homeostasis maintenance, learns to include external objects (children, kin, society) in its world model as critically important elements whose state directly affects its own Vitals. The significance of these objects rises to a level where their protection and well-being become more important to the Egostat than short-term individual gains—manifesting as self-sacrifice and ethics.
Significance is the sole regulator of the adaptive system. It is a positive or negative evaluation accompanying the organism’s current state. Initially, it arises from Vital states: Norm, Deviation from Norm (Bad), and Return to Norm (Good). The significance of air deprivation exceeds that of water shortage, which in turn exceeds that of food shortage—enabling competitive selection of response priority (motivation).
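A minimal sketch of this competitive selection, assuming invented numeric weights (only the ordering air > water > food is taken from the text):

```python
# Hypothetical sketch: competitive selection of response priority (motivation).
# Each active deviation from norm carries a negative Significance; the most
# negative one wins the competition and sets the behavioral context.
deviations = {
    "air shortage":   -9.5,   # invented weights; only the ordering
    "water shortage": -6.0,   # air > water > food comes from the text
    "food shortage":  -4.0,
}

motivation = min(deviations, key=deviations.get)  # the worst state wins
print(motivation)  # -> "air shortage"
```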
The adaptive value of the Norm state is that no effort is needed for life support—only the innate Vital “Dissatisfaction with the Status Quo” might prompt proactive action. In the Norm state, time and capacity are freed to optimize impressions that could not be processed during busier periods. Special dream phases during sleep are also allocated for this.
The value of Bad is the necessity to activate an appropriate behavioral context to begin restoring norm.
The value of Good is to prevent immediate cessation of need-satisfaction as soon as an opportunity is found, but to continue the process until norm is achieved.
Empirical research data (fornit.ru/68564, fornit.ru/7317) show that in reflex-level systems, the pleasure center activates not upon return to Vital norm, but long before. When an object capable of restoring norm is found via unconditioned reflexes, search conditions cease, and action programs for need satisfaction must be initiated. This requires a context that sustains need satisfaction until receptor activity subsides.
If an animal is thirsty or hungry, exploratory behavior—a chain of instincts—activates to locate a source. When the source is found and verified as suitable, the search ends. At this point, consumption could stop—but instead, the pleasure center strongly activates, and the animal continues to replenish the need. Vitals are not yet restored, but pleasure is already present and persists until Vitals are normalized (and even beyond, if consumption is too rapid). Then the pleasure center deactivates.
The animal repeats the action as long as it receives pleasure.
In this scheme, the pleasure center (“Good”) is not a direct antagonist of the “Bad” center. Receptors signaling deviation from norm deactivate as the need is satisfied, but this occurs at different rates depending on the need type. To deactivate hunger receptors, blood must be saturated with glucose—which happens only after prolonged digestion. Gonadotropic receptors, however, may deactivate quickly upon fertilization signals.
Nevertheless, the baseline states Bad and Good are antagonistic within the organization of basic contexts and cannot be active simultaneously. This implies mutual inhibition—not direct (lateral), but via a more complex selection mechanism.
This appears contradictory. However, the data clearly show that there are multiple distinct pleasure centers, differing in both brain localization and functionality (fornit.ru/68565). It follows directly that the overall “Good” state is a composite of these centers. The mechanisms serving the three baseline states are not the same mechanisms that ensure complete need satisfaction.
Evolution has produced context-dependent chains of unconditioned reflexes that branch based on situational specifics. The basic motivator need only activate the trigger stimulus of such a chain to initiate behavior. For example, when gonadotropic drive crosses a threshold (a convenient scale for illustration), sexual behavior activates—the individual displays attraction. Upon seeing a potential mate, it immediately evaluates acceptability (recall: this is an example of a complex branching instinct). If the mate has a pimple on the nose, the individual may protest—and only a strongly advanced drive or alcohol might override this obstacle.
This sequence of unconditioned actions branches based on conditions evolutionarily encoded as hereditary mechanisms. The process is continuously refined. Only the newest branch tips are subject to mutations, and occasionally a chain extension emerges that confers competitive advantage. This is the evolutionary mechanics of instincts, based on mutations. Demonstration program of evolutionary basics: fornit.ru/evolution.
The states Norm, Bad, and Good are competitively antagonistic and constitute the most basic context—the foundational significance determining behavioral direction.
When a specific behavioral style is activated, only reactions appropriate to the current Vital state become possible. Each style, as a context of significance, colors perception with the significance inherent to that Vital state.
Positivity indicates the degree of success in homeostatic regulation associated with that Image; negativity indicates the degree of state deterioration.
Just as a school grade from 1 to 5 suffices to reflect performance, Significance requires no high-precision scale—a range from –10 to +10 is fully adequate. The scale is nonlinear: small values are most informative for response selection, while values near the maximum become increasingly compressed, with +10 representing an asymptotic limit of value growth.
This is an extremely simple yet deeply functional metric that unifies evaluation of everything—from physiological parameters to abstract Images—within a single priority scale governed by the Egostat.
Significance scale: –10 to +10. This unifies all experience—physiology, emotions, thoughts, actions—into one evaluative framework.
Accompanying various elements of activity during adaptive functioning, Significance imparts stimulating or avoidance-oriented directionality (fornit.ru/66643). This becomes especially important in understanding models (Semantories), abstractions, and rules. All adaptivity in novel conditions uses element Significances as the foundation of its functionality.
The scale allows comparing “apples and oranges.” One can objectively decide what is more important right now: quenching hunger (+5) or avoiding humiliation (–7).
Significance is the “glue” that binds experience into a world model (Semantory). In semantic memory, objects acquire semantic properties—Significance. “Fire” = –8 (danger), “Mother” = +7 (safety, resource).
Higher cognitive functions are manipulations of Significances within a virtual model. Consciousness calculates: “If I do A, Significance becomes +3; if B, –5. I choose A.”
All Egostat activity—from cellular level to consciousness—boils down to maximizing positive Significance and minimizing negative Significance. This is not an emotion or subjective experience, but an objective, quantitative metric of adaptive value for any state, action, or Image. This is the essence of the “circuitry of life.”
An Image is a functional structure designed to recognize a unique combination of input states. Therefore, each Image possesses its own uniqueness, which can be expressed as a numeric identifier (ID) enabling unambiguous addressing within the system.
The concept of an Image is conventional—it is introduced to isolate structures that serve as the fundamental components of adaptive processes. These processes do not operate on the full internal complexity of such structures but only on their unique identifiers: simple numeric values that stand in for the entire distinctive composition of the Image. Thus, Images become equivalent and universal elements manipulated by adaptive mechanisms.
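A hedged sketch of how an Image might be reduced to a numeric ID; the class layout and all names are illustrative assumptions, not the book’s specification:

```python
from dataclasses import dataclass, field
from itertools import count

_next_id = count(1)  # every Image receives a unique numeric identifier (ID)

@dataclass(frozen=True)
class Image:
    """Recognizer of one unique combination of input states.

    Inputs are themselves IDs (of sensor primitives or of simpler Images),
    so higher mechanisms can manipulate any Image by its ID alone."""
    inputs: frozenset
    id: int = field(default_factory=lambda: next(_next_id))

    def activated_by(self, active_ids: set) -> bool:
        # the Image fires only when its full unique combination is present
        return self.inputs <= active_ids

line_45  = Image(frozenset({101}))                      # primitive: "45-degree line"
line_135 = Image(frozenset({102}))                      # primitive: "135-degree line"
angle    = Image(frozenset({line_45.id, line_135.id}))  # built from simpler Image IDs
print(angle.activated_by({line_45.id, line_135.id}))    # True -> event with angle.id
```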
Ancient brain structures contain no Images, as they are built on hardwired, genetically predetermined connections—like a radio circuit or a machine control schematic. Images are special elements not characteristic of traditional electronic circuitry; they possess unique collective properties.
Images are relatively recent evolutionary formations, built upon a hierarchy of complexity in which combinations of increasing sophistication emerge from an initial array of the simplest primitives, thereby incorporating all the complexity of previous levels. This tree-like inheritance allows simpler Images to be extracted from complex ones—or the final, most complex Image to be treated as a symbolic representation of the entire preceding hierarchy, with the system operating solely on its ID.
Images emerged within the neocortex during the formation of level-specific recognizers in their critical developmental period. First, an array of the simplest primitives is formed; then this process is strictly terminated (because everything built upon these primitives must remain structurally invariant). This resembles the protection against mutations seen in fully developed elements that serve as the foundation for subsequent structures. Images enable the adaptive system to maximally align with the specific environment in which it develops. Images arise during ontogenesis.
An Image is not a picture or a “thought,” but a computational module whose task is to detect a specific, unique combination of input signals. These signals may originate from sensors (vision, hearing, touch), internal Vital states, the activity of other Images, or even abstract rules.
Image activation (its ID) occurs upon detection of its unique combination of input features. This activation is an event to which the system can respond. Each Image is tuned to one—and only one—distinct input configuration. This allows the system to discretize the infinite diversity of the world into manageable, addressable elements.
Instead of processing a massive array of raw data (pixels, hertz, neurotransmitters), the system operates with compact numeric codes. This radically simplifies and accelerates processing.
The ID of the “apple” Image, the ID of the “threat” Image, and the ID of the “mathematical formula” Image are all functionally equivalent for psychic-level adaptive mechanisms. The system can compare Significances, link Images, and make choices between them without regard to their physical or semantic nature.
The ID enables unambiguous addressing of any experiential element—critical for forming associations, memory, and behavioral control.
Older brain structures, the autonomic nervous system, and cellular mechanisms operate on the principle of rigid circuitry: Signal A always triggers Response B. There is no abstraction, no ID, no flexibility. It is like a relay in an electrical circuit: current flows → light turns on.
At the level of abstractions, rigid structures are represented symbolically by Images. Signals first pass through “detectors” (Images) that output their ID. Decisions are then made at the ID level. This allows the same input signal (e.g., the sight of a snake) to trigger different responses depending on context (the “danger” Image vs. the “pet” Image).
Simple Images (e.g., “45° line,” “1000 Hz tone”) form first. More complex Images are built upon them (“angle,” “musical note C”). Then even higher-level constructs emerge (“face,” “word,” “idea of justice”). Each level uses the ID of the previous level as its own “inputs.”
Images enable the system to discretize the infinite diversity of the world into addressable elements, to compare and link heterogeneous experience through Significance, and to build ever more complex constructs from simpler ones.
Without Images, there is no consciousness. Consciousness is the process of manipulating Image IDs—linking them, evaluating their Significances (fornit.ru/70455), and planning actions based on them. Images are the very “language” spoken by the Egostat when it transcends simple reflexive circuitry and begins to think.
At the instinctual level, reaction chains were tied to situation recognizers built on rigid, innate schemata. These did not permit adaptation to novel conditions. “Unconditioned reflexes” (Genoreflexes) followed a hardwired stimulus–response scheme and, during ontogenesis, could only be supplemented by cloning for synonymous stimuli (CloneReflexes, formerly called “conditioned reflexes”).
With the emergence of the neocortex, two hierarchies arose during ontogenesis: a hierarchy of perceptual Images and a hierarchy of action Images.
When an instinct was triggered, corresponding perceptual and action Images were activated, establishing a link that transferred the Genoreflex into the domain of voluntary regulation based on Images. Now, these primitive “proto-reflexes” (reflections of unconditioned reflexes) became attached not only to ancient activation structures but also to terminal perceptual Images that were co-activated alongside those ancient structures. In such cases, the activity of the more evolutionarily advanced response suppresses the ancient Genoreflex. This principle—where a newer, more sophisticated mechanism overrides an older one—operates across all evolutionary levels of adaptive functioning.
It now becomes possible to intervene in the reaction process under novel conditions, modifying the response linked to an Image to achieve a desired outcome. This is the path to conscious voluntariness.
The blocking mechanism—where a newer analog response inhibits an older reaction—can now be used to halt an action chain at any arbitrary step, reflect, and decide how best to proceed given the new contextual components.
Suppose a habitual action chain is activated: “Go to fridge → open door → take pastry → eat.”
At any stage (e.g., “take pastry”), the system can activate the Image of the current situation (“I’m on a diet,” “This is the last pastry for guests”) and the Image of the desired outcome (“be slim,” “don’t offend guests”).
These Images, carrying high Significance (e.g., +7 for slimness vs. +3 for pastry taste), initiate a competing action Image (“put pastry back”).
The blocking mechanism engages: activation of the “put back” Image suppresses execution of the next step in the original chain (“eat”).
The system “thinks”—that is, in Egostat terms, it activates and weighs alternative action Images and their predicted Significances before selecting the optimal path.
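A sketch of this arbitration step, under the assumption that competing action Images can be compared by a single predicted Significance value (the weights are invented):

```python
# Hypothetical sketch of the blocking mechanism: a competing action Image
# with higher predicted Significance suppresses the next step of a habit.
habit_chain = ["go to fridge", "open door", "take pastry", "eat"]

def next_step(step: str, competitors: dict) -> str:
    """Choose between the habitual next step and competing action Images,
    each weighted by its predicted Significance."""
    candidates = {step: 3.0, **competitors}   # +3: predicted taste of the pastry
    return max(candidates, key=candidates.get)

# High-Significance context Images ("I'm on a diet" -> "be slim" at +7)
# initiate the competing action "put pastry back":
print(next_step("eat", {"put pastry back": 7.0}))  # -> "put pastry back"
```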
Moreover, this architecture also enables mirroring—not only of one’s own instincts but also of others’ experiences through observation. This is an evolutionary breakthrough underlying social learning, culture, and ultimately civilization. A mechanism originally created for self-behavioral control is now repurposed to decode and copy the behavior of others.
An individual observes another’s action (e.g., a monkey sees someone washing a potato in a stream). Their perceptual Images activate: “hand,” “potato,” “water,” “hand moving in water.” Their action Images associated with these objects activate in parallel (mirror-like); the individual mentally “enacts” the action. The system records the causal link: action “wash potato” → result “clean potato” (positive Significance +5). This link is stored as a new behavioral rule—even if the individual never performed the action or received direct reinforcement. Next time the individual holds a dirty potato, the action Image “wash in stream” activates, and they perform it—without trying less effective alternatives.
The key element is “mirror symmetry”: the system uses the same action Images both for performing and for understanding others’ actions. When you see someone smile, the same neural (or functional) structures activate as when you smile yourself. This enables you to “put yourself in another’s place” and predict their intentions.
The result is exponential growth in adaptivity. The individual no longer needs to rediscover everything through trial and error (which can be fatal). They can adopt the successful experiences of others—including previous generations (culture). This is the foundation of knowledge transmission, language, and technology.
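A minimal sketch of such observational rule formation; the dictionary-based storage and all names are assumptions:

```python
# Hypothetical sketch: learning a behavioral rule from observation alone.
# The same action Image serves both for performing and for understanding
# another's action, so an observed (action -> result) pair can be stored
# as one's own rule without direct reinforcement.
rules = {}  # condition Image -> (action Image, expected Significance)

def observe(condition: str, action: str, result_significance: float) -> None:
    """Record a causal link extracted from another's behavior."""
    rules[condition] = (action, result_significance)

def act(condition: str):
    """Reuse the mirrored rule when the same condition is recognized."""
    return rules.get(condition)

observe("dirty potato", "wash in stream", +5.0)   # the monkey watches a neighbor
print(act("dirty potato"))                        # -> ('wash in stream', 5.0)
```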
The critical period for specializing the most complex Image recognizers is also limited: entirely new terminal Images cannot keep emerging for every novel variation. This would require continuous neurogenesis, but adult neurogenesis has not been observed in the tertiary associative parietal cortex. However, minor differences are still detected because simpler recognizers respond to them. You recognize a face even if the photo has a scratch, is slightly blurred, or a strand of hair lies differently. The scratch, as a separate object of attention, is also recognized. The system extracts significant patterns (relative positions of eyes, nose, mouth) and filters out insignificant deviations.
First, the face Image is recognized (e.g., “this is my grandmother”—or even a specific grandmother if perceived during the critical developmental period). Only then does attention shift to finer details. At the level of conscious perception, a combination emerges: the recognized complex Image plus its qualifying details, and this composite is stored in Historical Memory.
Thus, upon subsequent perception of the same combination, recognition and understanding occur via retrieval of a Historical Memory “frame.” This requires that, during its specialization, the Historical Memory frame neuron preserved connections both to the complex recognized Image and to the detail Images activated by conscious attention.
Since memory is reconstruction, not exact recording, the brain can easily “add” template-based details to a memory that were not actually present. For example, if you know your grandmother’s kitchen always has a red kettle, you might “remember” it in that scene—even if it wasn’t there that evening.
Recognition is faster and more accurate in the same context in which the Image was originally encoded. It’s easier to recognize a colleague in the office (context matches the memory “frame”) than to encounter them unexpectedly on a beach in another city (different context requires more time for the template to “adjust”). You instantly recognize “a middle-aged man,” but full identification occurs consciously through Historical Memory retrieval: first, frames with maximal contextual match are recalled (logically—if no exact match is found, the context is broadened). However, if the face was perceived during the critical period for complex Image formation, it is recognized immediately.
If the final critical period were not time-limited—if new terminal Images kept emerging via continuous neurogenesis—the neocortex would run out of physical space (already forced into folds to compensate). Therefore, evolution found another solution for refining Images: combinations of multiple Images stored in Historical Memory.
Long-term consolidation of a memory frame takes about half an hour. If, during the awareness process, attention was directed to details (activating simpler Images in other branches of the condition tree), those details also become linked to the frame’s neuron. Upon frame activation, the full ensemble of Images is recalled. This leads to a more complete informedness, essential for selecting an appropriate response.
Internal attention shifts occur within the context of the still-held (fornit.ru/70743) actual Image, preserving thematic continuity: “I’m still thinking about this.” It suffices to initialize the Infocontext using only the new external actual symbol. The array of active Images serves as external working memory for the awareness process—retaining what was attended to. These activations are minimal after sleep and gradually accumulate again.
Because “re-recognition” via different details in varying situations can yield different meanings, the effect of subjective interpretation arises: the same Image may carry entirely different significance at different moments (fornit.ru/70779). Interpretation depends on whether the Understanding Model contains a match for the suggestive cues in the stimulus. Artists and designers—who constantly refine their ability to “see from hints”—possess such rich Understanding Models that they instantly perceive details invisible to others. In inkblots, they discern countless familiar forms.
In terms of qualia, Understanding Models generate the feeling of personal experience, the flow of thought, and awareness itself. Merely glancing at something familiar imbues it with meaning and understanding.
In software implementations of artificial living beings, neocortical space constraints do not exist, and neuron emulation is unnecessary. In such cases, if the terminal node of a tree branch is not activated, a new Image is immediately saved and used—without offloading part of recognition to Historical Memory retrieval. Thus, the problem of recognition difficulties in novel contexts disappears. The use of Image IDs ensures extremely compact storage, allowing new Images to be saved without limitation.
Building on the concepts of the basic attributes of living beings, we now proceed to the fundamentals of their circuit-based implementation.
In contrast to the rather chaotic “patches” of ancient brain structures, the emergence of the neocortex introduced a clear tree-like hierarchy—the Dendrarch. This structure did not arise from nothing; its foundation consists of three baseline states: Norm, Bad, and Good. These are not emotions (fornit.ru/70312), but systemic markers signaling the status of Vital parameters. They establish the primary context for all subsequent perception and action.
Each of these states is accompanied by a set of behavioral styles (feeding, defensive, exploratory, etc.), activated by the DiffSigner—a mechanism that competitively identifies the most relevant behavioral styles (Homeocontexts) for the current situation based on Vital values, which determine the organism’s overall Significance state.
This defines the root of the tree-like structure of Significance contexts: at the base are the three fundamental states, each branching into the Homeocontexts possible within that state. Each Homeocontext, in turn, becomes a “parent node” for subsequent levels of the object hierarchy in the neocortex.
Branches develop through a hierarchy of perceptual primitives—increasingly complex Images built from simpler feature combinations.
The formation of the Dendrarch occurs during strictly defined critical periods of ontogenesis. At each hierarchical level, the system “expects” specific types of sensory input, enabled by the maturation of a corresponding neocortical layer.
For any new conditions encountered during the critical period of Image specialization, a new Image of that hierarchical level emerges within the context of the currently active branch—one already activated by known perceptual features. If no such matching branch exists—that is, if the current perception is so novel that no Image at this level of complexity was activated during the prior critical period—then nothing can form at the current level either. This constitutes novelty for which no basic perceptual Images yet exist.
Classic experiments by David Hubel and Torsten Wiesel (Nobel Prize in Physiology or Medicine, 1981) demonstrated this: kittens deprived of vertically oriented visual stimuli during a critical developmental window permanently lost the ability to process vertical lines. The absence of this most basic perceptual primitive prevented the entire cascade of higher-order Images that would have depended on it. Adaptive mechanisms could no longer use not only the vertical-line primitive itself but also the vast array of secondary Images built upon it. Behavioral responses had to rely solely on existing primitives, causing the kitten to repeatedly bump into vertical objects like mops—until it learned to infer their presence indirectly through other cues.
The tree structure is the most natural and efficient for search and categorical representation: earlier (closer to the root) nodes function as general categories (e.g., “animal”), while deeper nodes represent specific instances (“my cat Murka”). This allows the system to filter information rapidly—first identifying the broad category, then “descending” the branch only if detail refinement is needed.
Moreover, the tree enables precise novelty detection when a branch is incompletely recognized (assuming a fixed number of hierarchy levels—i.e., all branches ultimately contain the same number of nodes). This provides a significant advantage over other hierarchical organization methods.
Starting from unimodal perceptual primitives, each new level integrates increasingly complex feature combinations, with terminal nodes unifying all sensory modalities into a single final Image. The number of such levels can be evolutionarily optimized per species. More levels allow more intermediate categories but slow the growth of Image complexity, creating transitional states.
Every Image—regardless of complexity or hierarchical position—has a unique identifier (ID), corresponding in biological implementation to a synaptic address. This allows the system to treat any experiential element as a distinct, addressable entity.
For example, the “fire” Image can be linked to its Significance (–8), to the motor program “jump back,” and to Historical Memory of a childhood burn. All this information is bound to a single ID, forming a rich, multidimensional “semantic packet.”
The “apple” Image is not merely a red circle—it is a combination of visual shape, smell, taste, texture, memories of past experiences, and even the current feeling of hunger.
The tree-structure principle applies not only to perceptual Images but also, for instance, to Historical Memory frames and other hierarchical systems, offering substantial advantages. A Historical Memory frame of an awareness moment includes the full perceptual Image, which can be recalled via the ID of the terminal node in the condition-context tree. Tracing from this terminal node back to the root reveals the emotional context (active Homeocontexts) and the baseline homeostatic state. To this Image are attached elements resulting from the awareness process—from the semantic Significance of the Image in those conditions to stored behavioral rules. Recollection is the activation of this node, which “unfolds” all associated information.
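A hedged sketch of such a frame as a data structure, with all field names and values invented for illustration:

```python
# Hypothetical sketch of a Historical Memory frame: everything is bound to
# the ID of the terminal node of the condition-context tree, and recall
# "unfolds" the whole associated ensemble.
frames = {}

def store_frame(terminal_id: int, path_to_root: list, details: list,
                significance: float, rule: str) -> None:
    frames[terminal_id] = {
        "path_to_root": path_to_root,   # Homeocontexts and baseline state
        "details": details,             # simpler Images attended consciously
        "significance": significance,   # semantic Significance in those conditions
        "rule": rule,                   # stored behavioral rule
    }

store_frame(4207,
            path_to_root=["Norm", "exploratory", "kitchen", "grandmother's face"],
            details=["scratch on photo", "red kettle"],
            significance=+7.0,
            rule="greet")

print(frames[4207]["path_to_root"][0])  # recall reveals the baseline state: Norm
```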
The Dendrarch, together with Historical Memory, is not merely a storage system for Images—it is a dynamic map of the individual’s world and experience, built on principles of efficiency, adaptivity, and hierarchical organization. It is the skeleton upon which the full complexity of conscious and unconscious life is “strung,” enabling the Egostat to navigate infinite stimulus diversity and make adaptive decisions.
Numerous attempts to define novelty (including E. Sokolov’s work: fornit.ru/5304) have encountered difficulties in circuit-based implementation. Novelty can be either functionally useful or irrelevant. Indeed, novelty exists in every perception (“you cannot step into the same river twice”)—it is a kind of “noise” around a confidently recognized core. Simple comparison of old and new activation profiles would detect precisely this noise.
However, for adaptive functionality, what matters is novelty that directly affects the response: without it, one reaction is appropriate; with it, a different one is required.
A simple and functional solution emerges from the activation pattern of the condition tree. If a terminal branch (with a fixed number of nodes) is fully activated, there is no significant novelty, and the terminal Image can be confidently linked to a response. Henceforth, any activation of this branch indicates sufficiently familiar conditions, allowing a habitual response.
Minor, non-disruptive variations in component features—“noise”—are entirely ignored.
If, however, the terminal node remains unactivated, the response becomes uncertain, triggering hesitation. The system can now react only at the level of the last activated node (as a category), but that category may contain many branches, making selection difficult. This necessitates engaging a specialized algorithm—the awareness process—to find an appropriate solution.
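A minimal sketch of this novelty test, assuming a fixed branch depth and integer node IDs (both invented for illustration):

```python
# Hypothetical sketch of novelty detection on the condition tree: a branch
# has a fixed number of levels; if its terminal node is not activated,
# the conditions contain adaptively significant novelty.
BRANCH_DEPTH = 4  # fixed number of hierarchy levels (species-specific)

def deepest_activation(branch: list, active_ids: set) -> int:
    """Return how many consecutive levels of the branch were recognized."""
    depth = 0
    for node_id in branch:            # from root category to terminal Image
        if node_id not in active_ids:
            break
        depth += 1
    return depth

branch = [1, 12, 123, 1234]           # "animal" -> ... -> "my cat Murka"
depth = deepest_activation(branch, active_ids={1, 12, 123})
if depth < BRANCH_DEPTH:
    # respond only at the level of the last recognized category, and hand
    # the stimulus over to the awareness process (via Aten)
    print("novelty at level", depth)  # -> novelty at level 3
```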
Such novelty directly triggers an innate reaction: the stimulus with the novel component is held in working memory and connected to the priority attention channel (Aten).
The core of this orienting reflex is older than the neocortex (fornit.ru/68305), but it has evolved specialized functionality involving Images, their novelty, and the Significance of the behavioral context.
If our sensory organs were static—capturing fixed “pictures” and refusing new input until processing finished—orienting would be simple: the reflex would activate only when a tree branch failed to complete. But we are dynamic: our eyes saccade across the visual field, and the external world is constantly changing. This creates fluctuating activation patterns—branches alternately fully and partially activated.
This raises a critical problem: which branch should capture the single priority attention channel for interpretation? Awareness is a complex process, and there is only one such channel.
In the artificial Beast system, a simplified orienting reflex algorithm is implemented due to highly constrained perceptual dynamics.
Adaptively significant novelty is the boundary separating reflexive response from novelty processing for adaptive reaction formation. In lobotomy, the latter system ceases to function, leaving only reflexes. The subject exhibits only habitual responses—so well-tuned that their unconscious nature is hard to detect. Only novelty reveals the deficit: the response fails to adapt, leading to unexpected outcomes. Habitual behavior is optimal—unless novelty appears.
Detection of negative Significance is more critical for survival than positive, reflected in the brain’s disproportionately larger neural substrates for negative valuation versus reward centers. A habitual reflex in novel conditions may yield unexpected consequences. Hence, a vast portion of higher animal brains is dedicated to processing significant novelty. Negative Significance is detected with higher priority—an evolutionary imperative: avoiding death is more urgent than gaining benefit.
Novelty lacking sufficient Significance, or with undetermined Significance, does not attract attention (it is simply unnoticed)—just as Significance with zero Novelty (i.e., well-learned conditions requiring no reinterpretation) goes unattended. This aligns with the formula for attracting conscious attention:
Image Actuality = Novelty × Significance
Note: Novelty ≠ Unexpectedness. Unexpectedness can be reflexive (flash → blink), whereas Novelty applies only to what requires conscious reinterpretation.
Novelty is a “blank spot” on the adaptive world map—a signal that the model must be updated. At the reflex level, there is no Novelty—reflex execution requires no reinterpretation.
It becomes clear that many strategies exist for detecting adaptively significant novelty, and the chosen method fundamentally shapes how novelty is processed to select a response. As the saying goes: “As you name the ship, so it will sail.” The origins of the orienting reflex suggest that across species inhabiting diverse environments, unique mechanisms for detecting adaptively significant novelty evolve—but all converge on selecting the most actual among all active novelty-significance combinations. This winning actuality is held for processing and connected to the priority attention channel.
The terminal (most complex) active Image—whether perceptual or mental—currently in the focus of the priority attention channel is the Actual Stimulus. It possesses the highest Novelty × Significance product. Logically, it must be the most actual among all active Images.
The Orientant triggers the orienting reflex—an innate program for reorienting attention and sensory resources toward a source of uncertainty with high Significance.
Images monitored for priority attention include both perceptual and mental Images from the interpretation process. Thus, the priority attention channel can switch from external to a more actual mental Image. Moreover, when deeply engaged in important interpretation, the channel raises its switching threshold to prevent distraction—but maintains a “sentinel mode” that still allows interruption by a highly actual new Image.
The orienting reflex is an innate mechanism selecting the most actual stimulus from all active perceptual (and mental) Images. It ensures that the winning actuality is held in working memory and connected to the priority attention channel (Aten) for interpretation.
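A hedged sketch combining the actuality formula with the raised switching threshold of the engaged channel (the threshold value and data layout are assumptions):

```python
# Hypothetical sketch of the orienting reflex: among all active Images the
# one with the highest Actuality = Novelty x Significance wins the single
# priority attention channel; while deeply engaged, the channel raises its
# switching threshold but keeps a "sentinel mode".
def select_orientant(images: list, engaged: bool, current=None):
    threshold = 6.0 if engaged else 0.0     # invented value for illustration
    best = max(images, key=lambda im: im["novelty"] * abs(im["significance"]))
    actuality = best["novelty"] * abs(best["significance"])
    return best if actuality > threshold else current

images = [
    {"name": "rustle",  "novelty": 0.9, "significance": -8},  # possible threat
    {"name": "thought", "novelty": 0.2, "significance": +4},
]
print(select_orientant(images, engaged=True)["name"])  # -> "rustle" (7.2 > 6.0)
```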
If a mental Image is most actual, it does not create a new hippocampal reverberation but instead organizes the main interpretation cycle in the frontal cortex, relegating others to the background (the insight phenomenon). Thus, the original perceptual stimulus remains held in the hippocampus (preserving working memory of the topic), while awareness may shift through several more actual mental Images, each updating the informedness picture—which serves as context for the next interpretation step.
I.P. Pavlov described an animal’s reaction to a novel, sudden stimulus as the “What is it?” reflex or Orienting Reflex.
Aten provides the link between awareness processes and sensory perception or mental activity. Awareness mechanisms have evolved into a vast hierarchical system for increasingly deep and efficient processing of the actual stimulus—requiring, in humans, the prefrontal cortex.
According to A. Ivanitsky’s model, sustained stimuli in hippocampal feedback loops connect to the frontal lobes, with the orienting reflex selecting the most actual Image from all candidates. However, the frontal cortex also contains multiple reverberating loops—only one serves as the main awareness iteration, while others are interrupted or completed interpretation cycles (the unconscious). Within one such loop, ongoing processing may reveal extreme Significance—even if the stimulus was already in the priority channel—making this mental actuality itself an Orientant that captures attention. This is known as insight or illumination: a previously unconscious cycle becomes the main one and enters awareness.
Without Aten’s attentional focus on a stimulus (pain, image, thought), that stimulus is not felt—it produces no subjective experience—because attention is fully allocated to a more important stimulus. Thus, Avicenna (Ibn Sina) once cured a prince’s abscess by having him play chess with his favorite opponent; so engrossed was the prince that he felt no pain during the operation.
Animals simpler than fish lack an Aten and therefore have no sensations—they react purely reflexively. Yet some of these animals (e.g., bees, wasps, ants, cockroaches) possess perceptual systems as sophisticated as, or even superior to, humans in certain modalities. They exhibit complex reflexive responses to perceptual elements, contextually modulated—functionally no worse than humans in this regard.
In lobotomy (for which Moniz received the Nobel Prize in 1949)—once widely practiced—the Aten mechanisms are damaged. Patients retain acquired reflexes but lose sensations and subjective experience. Similarly, certain psychopathologies impair Aten functionality, rendering individuals functionally indistinguishable from advanced insects: their perceptual systems identify the most actual stimulus but fail to connect it to the prefrontal cortex, losing all psychic functionality (fornit.ru/70546). Such people act on autopilot via habitual responses. Externally, they may appear normal—which is why lobotomy was practiced—and even seem relieved of mental disorders.
Thus, it is evident that sensations arise only upon Aten activation. One part of the brain continues to function like an insect’s; another part consciously processes the most actual stimulus—producing sensation. Aten’s decisions override older reflexes: when conscious attention focuses on a stimulus, associated reflexes are blocked, and Aten’s decision governs action.
We can perform habitual actions with effortless grace (dancing, playing violin, walking a tightrope at great height)—but the moment we consciously attend to how we do it, the smooth execution falters. We become clumsy—balancing awkwardly on a rope or even on stones along a riverbank. I once knew someone who, in dangerous moments, would intensely focus all attention and intellect—and invariably fall into the water, even on simple stones (fornit.ru/1133).
If a cat suddenly pondered what it was doing while confidently walking a thin branch toward a bird’s nest, it would lose balance, fall, and hang in terror—reliving the event. But cats are highly conservative; they rarely reflect unless absolutely necessary.
Humans, too, can perceive without interpretation—simply registering current Images without thought—but this is difficult because humans usually maintain an ongoing experiential theme or goal, which inherently prioritizes certain stimuli. However, through meditation that clears this context, one can achieve contemplative thoughtlessness.
In early development, a child lacks Aten and possesses only insect-like perception and reflexes. The prefrontal cortex then matures, forming the Aten system—but initially, Images entering it carry no Significance, as there is no prior experience with them. Such Images convey no information and thus produce no sensation (fornit.ru/830). Only well-understood Images with verified Significance impart definite meaning to perception (fornit.ru/70455)—and thereby begin to be felt as that specific Significance to the self. This Significance—understanding an Image’s properties and possibilities across contexts (fornit.ru/69260)—defines the “I” in relation to that Image, because all Significance is evaluated ego-centrically, from the perspective of interaction with the self (fornit.ru/70018).
Our sensation is the understanding of the meaning (Significance) of the current actual stimulus (external or mental), which forms the current “Center of Personal Activity” (fornit.ru/70640)—that which drives consequences in decisions about responses (target mode: fornit.ru/68516) or refines the Significance of current actual stimuli (passive mode: fornit.ru/68279).
Thus, we conditionally divide the individual adaptive system into two main parts with distinct adaptive functionality: reflexes and interpretation of the actual. Aten is not merely “focusing”—it is the key switch between the reflexive world of insects and the world of conscious experience.
Aten is the sole “narrow passage” through which information enters the sphere of conscious experience. Without Aten activation, a stimulus—no matter how intense (pain, bright light, anxious thought)—does not become a sensation. It is processed only at the reflexive or unconscious (background) level.
We can perform complex actions virtuosically “on autopilot” (dance, play an instrument, walk a tightrope)—this is the work of refined NoReflexes.
But the moment we deliberately direct Aten to the execution process (“How am I doing this?”), the automatism breaks. Conscious control interferes, blocks the smooth operation of the reflex, and we become clumsy. This is not a flaw but a feature: Aten is designed for solving novel problems, not for managing refined programs.
An infant possesses only sensory perception and basic reflexes. Aten is not yet formed. The world is perceived but not consciously experienced. Images lack assigned Significance—they are “empty.” Thus, the child does not fully “feel” the world—it merely reacts to it.
As the prefrontal cortex develops, the Aten channel forms. The first Images entering it are non-informative—they lack history and Significance links.
As the child accumulates experience, their Semantory fills. Images acquire context-dependent Significance. Now, when such a “charged” Image enters Aten, it evokes a sensation—an ego-centric evaluation: “This is good/bad/interesting/dangerous.”
It is precisely through the lens of Significance assigned to Images in Aten that the sensation of “I” emerges—the center from which evaluation originates. “I” is not a substance but a function of ego-centric evaluation occurring in the Aten channel. Every conscious experience is “I” interacting with the world through the lens of Significance.
A reflex is an unconscious program of response (external motor or internal mental) triggered by a unique combination of perceptual features. It is not merely a reaction to a stimulus, but a reaction to a context recognized by the system.
This aligns with I. Sechenov’s definition: “A standard responsive reaction of the organism to external influence, mediated by the nervous system,” but clarifies the role of context and emphasizes that reflexes are not part of awareness, though the mechanisms of awareness themselves are innate reflexes.
Sechenov wrote: “All acts of conscious and unconscious life, by their mode of origin, are reflexes.” However, some adaptive mechanisms respond not only to external (or internal sensory) inputs but are activated by a specific structure of informedness (B. Baars, G. Tononi, D. Dubrovsky), which serves as the context guiding the direction of awareness—just as external context guides reflex selection. The organization of such mechanisms is not a reflex, but serves to form new reflexes for novel contextual components through the dynamics of the awareness process.
A reflex is triggered by a unique combination of perceptual features (recognized by a context Image). It may be innate (a Genoreflex) or formed during ontogenesis (a CloneReflex, OptReflex, or NoReflex).
All Sechenov-style acts (stimulus–response in context) are reflexes—but only after they are formed. The formation process itself is not a reflex but a meta-process. In innate reflexes, this is genetic predisposition realized during ontogenetic maturation. In novel ontogenetic reflexes, it involves forming new connections (e.g., with perceptual primitives, repeated stimuli, or cerebellar circuits) or constructing responses via awareness processes.
Reflexes are not a relic of the past, but the fundamental technology upon which the entire adaptive Egostat system is built. From the simplest Genoreflexes to the most complex NoReflexes—this is a single continuum of automation. The function of the actual-stimulus processing cycle is to find an alternative to habitual action, accounting for novelty. And this new action becomes a new automatism.
The primary function of consciousness is to form a system of practice-verified, confident automatizations for actual stimuli—actions that no longer require conscious attention, are free from errors of assumption, illusion, and doubt, and enable the most reliable operational response. Thus, without stimuli, there is no consciousness. In sensory-deprivation experiments (participants immersed in warm water, in complete silence and darkness), subjects first entered a passive mode (fornit.ru/68279) and engaged in fantasy, but once mental scenarios were exhausted, they fell asleep.
From molecular reactions in a cell to the most complex professional human skills—all are reflexes of varying complexity: Genoreflexes → CloneReflexes → OptReflexes → NoReflexes.
This pyramid is not a static structure, but a dynamic, self-learning system. New levels do not replace old ones but are built upon them, using them as a reliable foundation. Basic reflexes (e.g., withdrawing a hand from fire) remain unchanged because they guarantee survival. In contrast, social behavior or intellectual problem-solving is continuously updated through the formation of new NoReflexes.
When you first sit behind the wheel of a car, every movement demands intense attention. You consciously think: “Foot on the gas,” “Hands on the wheel,” “Check the mirror.” This is the work of Aten and Iteron. The system analyzes novelty, tries different actions, and evaluates their consequences via the DiffSigner (“Pressed too hard—car jerked—Significance –3”).
After hundreds of repetitions, the system optimizes this process. OptReflexes form (the cerebellum coordinates movements), and ultimately a NoReflex emerges—you drive “on autopilot.” Conscious attention (Aten) is no longer needed for basic maneuvers. It is freed to solve new, unforeseen tasks—e.g., reacting to a pedestrian suddenly running into the road.
The ultimate goal of consciousness is to make itself unnecessary for a given task. This sounds paradoxical, but it is the essence of adaptive efficiency.
Imagine consciousness as a compiler. Its job is to take the “source code” of a new task (experience, idea, problem), process it in high-cost mode (conscious thought), and compile it into efficient, optimized “machine code”—a NoReflex.
Once compiled, the program (automatism) runs quickly, without interpreting each line of code. The compiler (consciousness) is then free to compile the next program.
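Continuing the compiler analogy, a minimal caching sketch (the names and the stand-in for the awareness process are assumptions):

```python
# Hypothetical sketch of the "compiler" view: a costly conscious search runs
# once per novel situation; its result is cached as a NoReflex (automatism)
# and reused without engaging attention again.
automatisms = {}  # situation Image ID -> compiled response

def conscious_search(situation_id: int) -> str:
    # stand-in for the awareness process: weighing alternatives by Significance
    return "brake smoothly"

def respond(situation_id: int) -> str:
    if situation_id in automatisms:
        return automatisms[situation_id]          # fast, "on autopilot"
    response = conscious_search(situation_id)     # slow, high-cost mode
    automatisms[situation_id] = response          # "compile" into a NoReflex
    return response

respond(77)           # first encounter: conscious processing
print(respond(77))    # thereafter: the compiled automatism, Aten stays free
```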
The evolution of the Egostat is a path from total reflexiveness (insects) to managed reflexiveness (humans). Humans did not eliminate reflexes—they learned to create them. Consciousness is not the opposite of reflexes, but their highest regulatory layer.
Thus, the highest form of adaptivity is not endless conscious deliberation, but the ability to transform the most complex task into a simple, reliable, fast, and error-free reflex. This is the essence of the “circuitry of life.”
Developing a full adaptive system is impossible without the sequential path from the simplest adaptive mechanisms (Genoreflexes) to the most complex mechanisms enabling awareness of the actual. All acquisitions are consequences of augmenting what came before. Therefore, all types of reflexes remain necessary. Each reflex type solves its own unique and critically important task within the overall survival system. Together, they form a hierarchical pyramid of reliability and efficiency: lower levels ensure basic viability, while upper levels provide flexibility and innovation.
In the individual adaptive system, there is an inevitable hierarchical sequence of evolutionary adaptive mechanisms, with each new level characterized by the emergence of new, pre-functional devices ensuring that level’s functionality. These are Genoreflexes. Based on previous levels, the genetic code (or engineering schematics, including software) constructs subsequent ones.
Initial stages contain basic DiffSigner mechanisms in their most primitive form, providing a set of behavioral styles within which specialized stimulus–action reflexes and their chains emerge. At this zero stage (before birth), everything is pre-prepared—and this determines all subsequent development.
One could conditionally divide mechanisms into those ensuring algorithmic functioning and those for direct response (traditionally called “unconditioned reflexes” and “instincts”), but both are genetically predetermined and fundamentally indistinguishable. Each innate mechanism has its functional purpose: some for external response, others for internal regulation.
The next, first developmental stage already adapts to environmental conditions, forming hierarchies of perceptual and action Images. Simultaneously, innate mechanisms emerge for detecting Image actuality (orienting reflex), the priority attention channel (Aten), Historical Memory, and a first-level awareness-depth dispatcher (Dispatcheron), along with cerebellar support. The DiffSigner scheme is augmented with new functionality: generating a signal of the organism’s current Significance state. This is a new generation of innate Genoreflexes.
Any Genoreflex can be represented as a sequence of actions achieving a specific homeostatic goal (corresponding to the traditional concept of “instinct”). Such chains are triggered depending on contextual specifics, so instinctive behavior consists of context-branching action chains.
An example of branching is the mate-selection chain described earlier: the trigger stimulus activates attraction, the sight of a potential mate triggers evaluation of acceptability, and the result of that evaluation selects the next act (continuation or protest).
The term “instinct” is thus redundant and ambiguous—it doesn’t matter whether a chain contains 1 or 100 motor acts; all chains are always triggered by a unique contextual combination. Therefore, the term Genoreflex suffices. Behavior “branches” not because it’s “instinct,” but because each subsequent act is itself a Genoreflex triggered by a new perceptual feature combination.
Genoreflexes do not “turn on” immediately but activate during critical periods of ontogenesis (e.g., the sucking reflex in newborns). Their functionality depends on timely activation—if the stimulus is absent at the right time, the reflex may fail to form (as in Hubel and Wiesel’s kitten experiments).
Three levels of Genoreflex implementation can be distinguished:
Genoreflexes with external action are triggered by stimuli perceived in context, while those with internal effects are activated by signals dependent on specific conditions. This is not mere “stimulus–response,” but “context + stimulus → response.” This eliminates the artificial division between “behavioral” and “physiological” reflexes—both are components of a unified homeostatic architecture.
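A minimal sketch of this scheme as a lookup keyed by context and stimulus together; the contexts and responses are invented:

```python
# Hypothetical sketch of the Genoreflex scheme "context + stimulus -> response":
# the same stimulus yields different responses in different contexts, and a
# chain arises because each act changes the features that trigger the next.
reflexes = {
    ("feeding context",   "prey visible"): "approach",
    ("defensive context", "prey visible"): "freeze",
    ("feeding context",   "prey close"):   "strike",
}

def genoreflex(context: str, stimulus: str):
    return reflexes.get((context, stimulus))  # rigid logic, no voluntariness

print(genoreflex("feeding context",   "prey visible"))  # -> approach
print(genoreflex("defensive context", "prey visible"))  # -> freeze
```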
Genoreflexes closely resemble electronic control devices built on rigid logic linking receptors and effectors. They can be arbitrarily complex and efficient, fulfilling their functional purpose. Genoreflexes exhibit strict cause-and-effect logic and lack voluntariness.
Unlike immutable, non-learning Genoreflexes, subsequent adaptive levels (CloneReflexes, OptReflexes, NoReflexes) introduce plasticity: they form during ontogenesis, fade with disuse, and optimize based on individual experience. Yet all are superstructures built upon the rigid Genoreflex foundation, without which neither stability nor learning would be possible.
The term CloneReflex replaces the outdated “conditioned reflex,” since the word “conditioned” is superfluous: all reflexes operate within specific contextual conditions.
A CloneReflex copies another reflex’s response but responds to a new stimulus. If a new perceptual element repeatedly co-occurs with a Genoreflex activation, a CloneReflex structure forms, enabling future response to the new stimulus alone. This expands adaptive reactivity to novel triggers.
Unlike lifelong Genoreflexes, CloneReflexes have limited duration and fade without confirmation. This is necessary to:
Thus, a CloneReflex is not just a connection, but a behavioral mechanism with its own properties.
CloneReflex extinction is not “erasure,” but regulation of connection weight between the new stimulus Image and the response program. If the prediction fails (rustle → no mouse), the link weakens proportionally to the discrepancy between expectation and reality. The greater the mismatch—especially if expected benefit fails to materialize or a threat proves illusory—the faster the link loses Significance and fades. This prevents accumulation of “false alarms.”
A CloneReflex can clone either a Genoreflex or another CloneReflex.
CloneReflexes require no reinforcement during formation—only a few repetitions of the new stimulus slightly preceding the old one (though intervals can be long, e.g., 24 hours; the link strengthens with repetition until fully established). This may seem counterintuitive: in I.P. Pavlov’s classic textbook experiment, the conditioned reflex formed only when the bell was followed by food. In reality, without food, no reflexive response occurs to the second stimulus, so there’s nothing to clone onto the new one.
Consider another example: a conditioned reflex forms if touching a water bowl delivers an electric shock. The dog exhibits an unconditioned withdrawal reflex to the shock; after several pairings, the bowl image alone triggers withdrawal. This is called “negative reinforcement,” though withdrawal is simply a reflexive response to shock.
Reinforcement (food, shock) is not the cause of linkage but merely a means to elicit a reflexive response that can then be cloned.
A CloneReflex forms when a new (neutral) stimulus repeatedly precedes an old stimulus that already evokes a reflexive response. The brain copies (clones) the response structure from the old to the new stimulus—not because it’s “rewarding” or “punishing,” but because the new stimulus becomes a predictor of the old, and the response is shifted forward in time to enhance adaptivity.
Example: A fox hears rustling in bushes → sees a mouse → catches it (Genoreflex: predatory behavior). After several repetitions, rustling alone triggers muscle tension and head orientation—even without seeing the mouse. Rustling has become a CloneReflex: it cloned the “hunt” reaction from visual to auditory stimulus. The reaction now triggers earlier, saving time and resources by anticipating events.
It should be noted that a CloneReflex is still an ancient adaptive mechanism, operating below the level where the DiffSigner evaluates reaction effectiveness via consequences. Such evaluative experience requires a high level of awareness—but CloneReflex formation involves no actual novelty to attract conscious attention. CloneReflexes continue to form even in lobotomized subjects.
However, in software implementations, nothing prevents using DiffSigner functionality to enhance CloneReflex efficiency: if it leads to positive outcomes (prey, pain avoidance), its Significance increases and it strengthens; if negative (false alarm, missed opportunity), Significance drops and the reflex fades.
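In code, formation plus extinction reduces to a weighted link. A hedged sketch (the thresholds and names are invented; only the pairing rule and the –10..+10 evaluation scale come from the text):

    from dataclasses import dataclass

    @dataclass
    class CloneReflex:
        new_stimulus: str
        cloned_response: str     # response copied from the older reflex
        weight: float = 1.0      # link strength; the reflex fires while weight > 0

    PAIRINGS_TO_FORM = 3         # "a few repetitions" suffice, reinforcement-free

    def try_form(pairings: int, new_stimulus: str, response: str) -> CloneReflex | None:
        """Clone the old response onto the new, predictive stimulus."""
        if pairings >= PAIRINGS_TO_FORM:
            return CloneReflex(new_stimulus, response)
        return None

    def evaluate(reflex: CloneReflex, outcome: float) -> None:
        """Optional DiffSigner update, outcome in [-10, +10]: failed predictions
        (rustle -> no mouse) weaken the link in proportion to the mismatch."""
        reflex.weight = max(0.0, reflex.weight + outcome / 10.0)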
The cerebellum receives information about the goal of action optimization (fornit.ru/23500), an attribute of the target-oriented awareness process. Its function in achieving a desired movement outcome is to store the precise combination of dosed force efforts optimal for the desired effect, while accounting for interfering factors (e.g., shifts in center of gravity). The cerebellum uses sensory signals as indicators of action-phase completion and potential obstacles. Thus, it significantly offloads the awareness process—not only for motor actions but also for mental ones. Cerebellar pathologies dramatically slow learning, as awareness must then generate countless coordinating reflexes itself.
Imagine learning to pour water from a full pitcher into a glass. Initially, you move cautiously: hand and wrist muscles are tense, motion is jerky. You consciously control every millimeter. After several repetitions, the cerebellum memorizes the exact combination and sequence of muscle efforts (dosage of force) that produce smooth, spill-free pouring. It creates a refined motor skill. Soon, you can pour water without looking and while talking. Consciousness is free for other tasks, while the cerebellum automatically executes the stored program.
The cerebellum receives information about the reaction’s goal and forms reflexes ensuring its efficient achievement during new reflex execution.
These are second-order, auxiliary reflexes that “tune” primary reflexes to current conditions, ensuring goals are met efficiently, without excess energy expenditure or conflict with other actions. One might say this is an evolutionary attempt to create a parallel support mechanism handling the simplest but most frequent corrections.
The cerebellum continuously receives input from the vestibular system (balance), proprioceptors (joint and muscle position), and vision, using it for instant adjustment.
Example – Walking on ice: Your primary reflex is to step. But the cerebellum, detecting instability, instantly tunes this reflex:
These are the “second-order reflexes” tuning the basic walking reflex to current hazardous conditions.
Example – Picking up a cup you think is heavy: You reach for a cup you believe is full. But it’s empty. Your hand jerks upward. Why? Consciousness commanded “lift heavy object,” and the cerebellum prepared corresponding muscle force. Without timely feedback, it applied the stored “force program.” Had the cerebellum known the cup was light, it would have precisely dosed the force.
The cerebellum is both an autopilot and a trainer. If consciousness (the awareness process) is the chief pilot setting global goals (“take off,” “land in another city”), then the cerebellum is the automatic flight control and stabilization system, handling all routine calculations for smooth flight.
It is believed the cerebellum optimizes cognitive processes by the same principle as movement—making them smoother, more precise, and timely. When we speak, we don’t just utter words. We structure them logically, select appropriate terms, and modulate pace and intonation.
With cerebellar pathology, speech can become “scanned” (broken into syllables), slowed, and lacking fluency—akin to clumsy movement, but in the speech domain. The cerebellum helps “coordinate” speech “movements.”
Rapidly switching from one task to another is also a skill. It is hypothesized that the cerebellum facilitates this process, making it fast and efficient, without “hang-ups.”
Thus, the cerebellum is not merely a movement coordinator, but a fundamental system for optimizing and automating any goal-directed action—whether running or solving a logic problem. It is consciousness’s chief assistant, handling the titanic workload of detail calculation.
Despite this, both cerebellar mechanisms and their resulting reflexes are remarkably simple—so much so that successful attempts to create an artificial cerebellar prosthesis have been made, though full human implants do not yet exist. The most famous example is an experiment on rats conducted by Israeli scientists led by Prof. Matti Mintz of Tel Aviv University. They developed a device capable of partially replacing a damaged cerebellum.
Rats with artificially lesioned cerebellums could not acquire a simple conditioned reflex—blinking in response to a signal paired with an air puff to the eye (the classic conditioned eyeblink paradigm).
The prosthetic device had:
A simple computer algorithm linked these signals on a reflex principle: upon detecting the conditioning signal, it predicted the coming puff and issued the blink command.
After calibration, rats with disabled cerebellums relearned to blink in response to the stimulus. The artificial chip successfully replaced the damaged region, implementing its core function: sensory stimulus → motor response.
Although such experiments show the technology’s promise, developing a full human implant remains challenging. The cerebellum plays a key role in movement coordination, balance, and motor learning, and replicating all its functions requires far deeper understanding of brain structure and function.
The entire cerebellar cortex consists of identical modules operating on a unified principle—like a processor made of millions of identical transistors. There’s no need to invent a new algorithm for each muscle—the same computational principle applies to all inputs.
The cerebellum receives two main signal types:
Core algorithm: “Compare and Correct”: The cerebellar microcircuit constantly solves one task: compare planned movement (from cortex) with actual execution (from sensors). If there’s a mismatch (error), immediately correct the command sent to muscles.
This correction occurs via inhibitory output from Purkinje cells to cerebellar nuclei. When movement proceeds as planned, inhibition is weak. When error occurs, inhibition increases to adjust the command.
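In control-engineering terms this is an ordinary error-driven corrector. A minimal sketch (a plain proportional correction with an invented gain; the real microcircuit is far richer):

    CORRECTION_GAIN = 0.5   # illustrative stand-in for Purkinje-cell inhibition strength

    def corrected_command(planned: float, sensed: float, command: float) -> float:
        """Compare the planned movement with its sensed execution and correct
        the outgoing command; larger errors produce stronger corrections."""
        error = planned - sensed
        return command + CORRECTION_GAIN * error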
Without the cerebellum, every movement would require full Aten attention. We couldn’t walk and talk simultaneously—each foot placement would demand conscious deliberation. The cerebellum frees consciousness from routine, allowing it to focus on novelty and strategy rather than tactics.
Despite its role in goal-directed actions, the cerebellum has no access to Aten. It operates entirely unconsciously. Even with complete cerebellar destruction, the subject remains conscious—they simply lose smoothness, precision, and the ability to quickly learn new coordination skills.
Clearly, implementing cerebellar functionality in an artificial system can employ diverse circuit designs and communication methods with the awareness process, affecting overall adaptive efficiency. For example, after each action, the DiffSigner can evaluate how close the result was to the goal. This evaluation (on a –10 to +10 scale) can be used by the cerebellum to fine-tune the next OptReflex. Thus, even “automatic” coordination remains under the ego-centric logic of the Egostat.
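A sketch of such post-action tuning (only the –10..+10 scale is given in the text; the proportional update rule is an assumption):

    def tune_optreflex(force_profile: list[float], diffsigner_score: float) -> list[float]:
        """Nudge the stored dosage of force efforts after each action:
        positive scores reinforce the profile, negative ones scale back
        the efforts that overshot."""
        factor = 1.0 + diffsigner_score / 100.0   # gentle, proportional adjustment
        return [effort * factor for effort in force_profile]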
The primary function of consciousness is to form reflexes alternative to habitual ones, accounting for situational novelty. Such reflexes are called automatisms, emphasizing their origin.
Automatisms exist as structures (just as CloneReflexes are objects with specific properties and behaviors—akin to class instances in programming, but in adaptive circuitry terms, they are Images).
Initially in ontogenesis, an array of such objects forms based on existing motor Genoreflexes. As the neocortical Image-recognition system develops, activity in those Images coinciding with Genoreflex activation becomes the trigger stimuli for primary automatisms.
In software implementation, one can simply convert all Genoreflexes into primary automatisms linked to terminal Images in a single operation (as done in the Beast system). From this point, it becomes possible to voluntarily (i.e., alternatively to habit) modify automatism properties and, in situations with significant novelty, construct arbitrary action chains optimized for goal achievement.
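A sketch of that one-shot conversion (the structures are illustrative; in Beast the automatism objects are richer):

    def bootstrap_automatisms(genoreflexes: dict) -> dict:
        """Turn every Genoreflex into a primary automatism bound to the
        terminal Image that triggers it; unlike its source, it is mutable."""
        return {
            terminal_image: {"action": action, "origin": "genoreflex", "mutable": True}
            for terminal_image, action in genoreflexes.items()
        }

From this array onward, awareness can edit automatism properties or assemble new action chains from them.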
During awareness of an actual stimulus, the reaction chain can be paused at any link to either:
Example: Musical instrument skill—initially each movement is consciously controlled; later, a NoReflex forms, enabling “automatic” playing while preserving expressiveness aligned with emotion and audience.
Perception may contain elements not detected as actual novelty and thus not consciously processed—but when actions yield unexpected negative results instead of expected positive ones, problems arise.
No action in response to a stimulus can be fully insured against such surprises. Therefore, the very first and most ancient level of awareness depth must always include a check for potential unexpectedness, based on experience stored in Historical Memory. Elements of novelty not detected by the condition-tree activation (and thus confidently triggering a linked automatism) may, upon a conscious “glance” at details, reveal that similar details previously led to negative consequences.
This process of selecting noticed detail Images from the perceived scene is extremely fast. If nothing alarming is found, the verification result is no longer consciously processed, and the automatism linked to the terminal Image executes.
If information is found indicating similar details previously caused problems, it blocks the automatism and takes control to seek a better solution.
In this “just-in-case” monitoring mode, consciousness is not the main “conductor” of behavior, but a correction mechanism for the “autopilot” in new or changing conditions. Automatic reactions (automatisms) perform the main work, while consciousness intervenes only when the habitual script fails or can be improved.
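Reduced to code, the check is a single fast scan (Historical Memory is collapsed here to a detail-to-Significance table; all names are invented):

    HISTORY = {"smoke_smell": -8, "scratch_on_face": -3}   # past Significance of details

    def precheck_and_run(noticed_details: list[str], automatism) -> bool:
        """First-level awareness: glance at details before the automatism fires."""
        for detail in noticed_details:
            if HISTORY.get(detail, 0) < 0:   # a similar detail once led to trouble
                return False                 # block the automatism; seek a better solution
        automatism()                         # nothing alarming: execute unattended
        return True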
Examples of first-level awareness verifying an automatism before execution:
The same awareness level responsible for automatism verification also handles refined recognition when an activated Dendrarch branch is insufficiently specific. The Image is recognized at a generalized category level—e.g., “person,” “car,” or “familiar face”—but doesn’t allow unambiguous identification in the current context.
In such cases, the system automatically directs attention to details that may clarify identification. This isn’t arbitrary search, but targeted retrieval from Historical Memory: frames formed in similar conditions are activated and compared with current perceptual features.
If contexts match—lighting, pose, background, emotional tone—recognition occurs almost instantly, without full Aten engagement. The system simply “finds” the right frame and confirms: “This is my grandmother,” “This is my car,” “This is the accountant.”
This process doesn’t require deep conscious thought, but it goes beyond pure reflexive recognition, as it relies on individual experience stored in Historical Memory. This is a hybrid mode: not novelty requiring interpretation, but not full certainty allowing automatism without verification.
This is why we easily recognize someone in a familiar setting but may momentarily “freeze” upon encountering them unexpectedly—say, on a beach in a Panama hat. The context doesn’t match, the frame isn’t immediately found, and a deeper interpretation cycle engages. But as soon as confirmation occurs—even via minor cues like gait, gesture, or intonation—the system instantly completes identification and returns to automatic mode.
The awareness process is implemented by a system of interconnected innate mechanisms, each specialized for a specific function. Together, they form a unified system to which the Actual Stimulus is connected via the Priority Attention Channel (Aten) for the duration of awareness.
The essence of this process is to step-by-step solve the problem of finding a reaction alternative to habitual behavior, accounting for significant novelty. The sequence of steps is determined by the Global Informedness Picture (Infocontext), which is updated with new information after each step. This updated context then defines the conditions for selecting the direction of the next step. Thus, the awareness cycle operates as an iteration guided by the dynamic state of the Infocontext.
In other words, the awareness cycle unfolds within an Infocontext continuously updated by the information generated at each step. The direction of each step is chosen by the Awareness Function Dispatcher (Dispatcheron), which issues a query to one of a set of innate Informational Functions (Infofunctions). The primary data source for these queries is Historical Memory, along with other data repositories (the awareness interruption stack, generalization buffer, automatism array, and the Dominanta of an Unsolved Problem).
The cycle continues until a desired alternative reaction is formed—either as a new action or as a modified automatism. Once a solution is found and consolidated, the Orientant loses its actuality, Aten is freed, and the new reaction can subsequently function as a NoReflex—without conscious involvement. This occurs after just one session of interpretation, in contrast to the multiple repetitions required to form a CloneReflex (in natural implementation, consolidation occurs through sustained activation of the held stimulus).
If no solution is found but the search remains highly significant, the search conditions and current state are saved in long-term memory as a Dominanta of an Unsolved Problem (a Gestalt in psychological terms). The Gestalt maintains a high drive to resolve the issue and acts as a powerful motivator, returning awareness to the deferred problem whenever conditions allow. Memory of stored Dominantas represents a second-order quality of Historical Memory: not only individual meanings and rules are recalled, but also solution pathways, strategies, and even informative errors. The Gestalt is the fourth and deepest level of awareness—the level of creativity.
Thus, awareness is not a state, but a goal-directed process of searching for an adaptive alternative, governed by the dynamics of the Infocontext and executed through a strictly defined set of innate functions. Its purpose is not “to think for thinking’s sake,” but to create a new automatism that renders repeated awareness in similar situations unnecessary.
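Assembled from these elements, the cycle can be sketched as a loop (a skeleton only; the objects passed in stand for the mechanisms described below):

    def awareness_cycle(stimulus, infocontext, dispatcheron, max_steps=50):
        """Iterate: the Dispatcheron picks an Infofunction based on the current
        Infocontext; the function's result updates the context; repeat."""
        for _ in range(max_steps):
            infofunction = dispatcheron.select(infocontext)     # direction of the step
            info = infofunction(stimulus, infocontext)          # query Historical Memory etc.
            infocontext.update(info)                            # new informedness
            if infocontext.solution_found():
                return infocontext.consolidate()                # becomes a NoReflex
        return dispatcheron.save_dominanta(stimulus, infocontext)  # deferred Gestalt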
There is no single “center of consciousness.” Instead, there is a network of specialized innate modules.
To describe how consciousness works, we must examine the functionality of all its components.
Innate Components of Consciousness:
Updatable Structures:
To set a goal and take steps toward identifying potential actions that could achieve it, a working memory is needed to hold all initial and intermediate results. This memory must be shared among all components involved in awareness.
For each type of information critical to interpretation (as determined by evolutionary optimization), a dedicated memory slot exists, connected to the awareness components that use that information. Whether these slots are localized or distributed is irrelevant and not considered here. What matters is that all slots form a unified, coherent context that:
Crucially, these slots constitute a single informational context—a unified system of informedness whose activity guides every step of interpretation. The information obtained at each step updates the Infocontext, which in turn sets the direction for the next step.
Unlike episodic memory—where a new neuron (~1,000 per day) specializes as the activator of a memory frame (fornit.ru/70648)—the Infocontext structure is predefined. Otherwise, the Dispatcheron would not know how to handle a novel, unknown context.
The entire step-selection process, based on the Infocontext, is managed by the innate Awareness Function Dispatcher.
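The predefined slot structure can be sketched as a fixed record whose fields the Dispatcheron knows in advance (the field names are invented, loosely following the slots described below):

    from dataclasses import dataclass, field

    @dataclass
    class Infocontext:
        base_state: str = "Norm"           # Norm / Bad / Good
        active_vital: str | None = None    # which Vital needs restoring
        who_am_i: str | None = None
        where_am_i: str | None = None
        whats_happening: str | None = None
        goal: str | None = None            # termination criterion of the cycle
        step_results: list = field(default_factory=list)

        def update(self, info: dict) -> None:
            """Each interpretation step merges its result into the shared context."""
            for slot, value in info.items():
                setattr(self, slot, value)
            self.step_results.append(info)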
We can imagine that the earliest aware organisms had very few such slots, limiting their interpretive capacity. However, competitive evolutionary optimization has refined this system across species possessing awareness mechanisms, resulting in species-specific versions best suited to their ecological and behavioral conditions. Even within a single species, individuals exhibit slight variations in Infocontext structure and other awareness components.
Scientists B. Baars (fornit.ru/70033), G. Tononi (fornit.ru/70040), and D. Dubrovsky (fornit.ru/70862), studying consciousness in biological adaptive systems, proposed models of Infocontext organization and function:
Modeling the artificial Beast system led to a logical, optimized version of awareness cycles and the Infocontext’s role within them. Given its open-source code, this represents a programmatic formalization of consciousness algorithms.
If we could visualize brain component activity, we would see that upon waking, the system prepares the Global Informedness Picture: all awareness memory slots are initially inactive, but organismic arousal provides baseline signals indicating the state—Norm, Bad, or Good. This is the first informing slot of the global Infocontext.
At the level of ancient homeostatic structures, specific Vitals requiring restoration are identified (feeding, reproductive, defensive behavior, etc.). In the prefrontal cortex, this activates mirror-images of emotional states and corresponding memory slots, generating a subjective emotional experience. This initial activation is saved as the first episode of the current state—along with all situational details—as a Historical Memory frame.
Next, the Infocontext is refined through queries for additional information: eyes open to acquire perceptual Images; slots for “Who am I?”, “Where am I?”, “What’s happening?” are activated.
Thus, the Infocontext is not a metaphor or abstraction—it is a working mechanism already proven functional. It is the core of the awareness process, without which voluntariness, creativity, or even basic adaptation to novelty would be impossible.
Here, “information” does not mean:
Instead, it refers to Images linked to Significance, informing about some aspect of current awareness.
The key aspect of information is its capacity to inform—to bind Infocontext components to their context-dependent Significance, enabling the awareness dispatcher to assess component actuality. Infocontext slots can be implemented in many ways; in the Beast system, they are represented by IDs of objects reflecting condition-tree levels:
The first step of awareness may involve retrieving the Significance of this symbol from historical experience—first by exact contextual match, then by broader conditions if data is insufficient.
Additionally, features revealed by detailed analysis of the terminal Image may be considered—including details not part of the terminal Image itself but noticed during recognition (e.g., a scratch on a face, smoke smell in background noise, unusual font in a letter). These can reveal extreme Significance (positive or negative). If such details were previously associated with extreme outcomes (e.g., threat), they instantly elevate the stimulus’s actuality—even if the main Image seems neutral—requiring special processing during interpretation.
This is a nontrivial concept. Information is often confused with raw data or sensory input. But data by themselves are merely conventional symbols (carriers of potential personal informedness), accessible only to those who understand the conventions behind them. A cat seeing a book gains no information—the book has no subjective Significance for it. For the cat, it is merely an object. No Significance = no information.
Data are conventional symbols. They become information only when the subject assigns them Significance within their adaptive context.
Dubrovsky insisted that the psyche is a subjective form of information (ideal in nature)—generated by the brain but not reducible to physical processes. A book is a physical object for a cat, but not informational, because the cat lacks the psychic code to “unpack” it.
This aligns with the “extended mind” concept (Clark & Chalmers), with a crucial clarification: a book becomes information not merely by existing externally, but only when the system (consciousness) can endow it with Significance within its internal context.
All elements (significant Images) of the Global Infocontext constitute a coherent, mutually reinforcing understanding of the situation.
For an Image’s Significance to inform the subject, it must be brought into conscious attention. Outside the single channel that processes the actual stimulus, there is no informedness; informedness is equivalent to awareness, since it is possible only through awareness of a stimulus.
William James, founder of American psychology, spoke of the “stream of consciousness” but noted it consists of discrete “perceptual atoms” or “minimal units of experience.”
The Significances of individual Images contributing to the Global Infocontext serve as elementary components of informedness (quanta of consciousness). Together, they create the conscious context of the current situation and interpretation stage—i.e., the subjective experience that evolves with each interpretation step.
Thus, significant Images are quanta of consciousness—elementary units of informedness.
No real mechanism can instantly determine a goal and find actions to achieve it in one step—especially since awareness operates in two competing modes:
Target mode always takes precedence over passive mode. In traditional neuroscience, passive mode corresponds to the Default Mode Network (DMN).
In passive mode, novel combinations emerge, linking disparate Images to generate ideas not present in objective reality—yet potentially useful for guiding behavior. Passive mode is the thoughtful fantasist of mental experience, whose creations can be tested in reality.
Early computer games used complex branching logic with AND/OR operators, creating unmaintainable “code jungles.” A stepwise iterative architecture is far more efficient:
This structure is simple, effective, and precisely what evolution arrived at for processing significant novelty.
With each new Actual Stimulus (held in a hippocampal feedback loop), an infinite cycle begins: the dispatcher receives initial informedness and selects problem-solving steps that update informedness with their results (fornit.ru/69997). The dispatcher monitors whether the functional task is complete and the stimulus has lost actuality.
This stimulus-triggered cycle is the main interpretation cycle. It can update the Infocontext through its iterative steps.
Animation demonstrating the awareness cycle: fornit.ru/demo17.
Interruption Stack
If a more actual stimulus arises, the current main cycle becomes background but continues running. It loses the ability to update the Infocontext (to prevent conflicting updates from multiple background cycles) but still uses the current Infocontext.
Upon interruption, the ID of the suspended main cycle is pushed onto an interruption stack, allowing return after the new actuality is processed (e.g., returning to a computer task after answering the door and phone).
The number of possible interruptions is evolutionarily optimized per species.
This mechanism is fundamental to the subjective sense of conscious continuity and goal-directed behavior. Despite constant attentional shifts, we maintain a “thread” of complex activities thanks to dispatcher-stack cognitive continuity.
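A sketch of this mechanism (the depth limit stands in for the species-specific optimum):

    MAX_INTERRUPTS = 4
    interrupt_stack: list[int] = []        # IDs of suspended main cycles

    def interrupt(current_id: int, newcomer_id: int) -> int:
        """A more actual stimulus takes the main channel; the old cycle is pushed."""
        if len(interrupt_stack) < MAX_INTERRUPTS:
            interrupt_stack.append(current_id)
            return newcomer_id             # the newcomer becomes the main cycle
        return current_id                  # stack full: the newcomer stays background

    def resume() -> int | None:
        """New actuality processed: return to the most recently deferred task."""
        return interrupt_stack.pop() if interrupt_stack else None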
Generalization Buffer
Another feature of interpretation is using memory to preserve meanings of perceptual fragments for holistic understanding. A child reads words letter by letter; an adult grasps whole words or sentences at a glance.
Buffer capacity is ~3–4 items in higher animals, ~5–7 in humans—but optimizes with experience. Excessive capacity may include irrelevant elements from prior generalizations, potentially causing schizophrenic-like effects. Healthy thinking requires clear separation of relevant from irrelevant stimuli. The dispatcher and generalization mechanism act as a filter—and theories of schizophrenia (e.g., “hyper-associativity” or “contextual binding collapse”) suggest this filter is impaired.
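A sketch of the buffer together with its dispatcher-side relevance filter (the capacity figures come from the text; everything else is illustrative):

    from collections import deque

    class GeneralizationBuffer:
        def __init__(self, capacity: int = 7):           # ~5-7 in humans, ~3-4 in animals
            self.items: deque = deque(maxlen=capacity)   # oldest fragments drop out

        def add(self, fragment_meaning, relevant: bool) -> None:
            """The filter whose impairment resembles schizophrenic binding failures:
            irrelevant stimuli must never enter the buffer."""
            if relevant:
                self.items.append(fragment_meaning)

        def generalize(self) -> tuple:
            """Fuse the buffered fragment meanings into one holistic reading."""
            return tuple(self.items)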
Background cycles constitute the unconscious—they do not update the Infocontext and thus remain unexperienced. However, if extreme Significance emerges within one, it can competitively trigger an insight, making that cycle the new main one (while the former main cycle becomes background).
By day’s end, many background cycles accumulate (including perception-side holds), increasingly hindering awareness. These cycles more frequently “break through” into consciousness. In natural implementation, this risks hyperactivation resembling epileptiform activity.
Thus, in relaxed or sleep states (when no external stimuli demand response), the system processes accumulated background activity in passive mode, sequentially deactivating completed cycles. Dreams serve this function—correcting Historical Memory frames. By morning, the system is cleared of activations without losing informational value.
Unlike a gaming computer, consciousness is stimulus-bound. Without input, it shifts to passive mode, then sleep—because its primary function is to find alternatives to habitual responses.
Passive-mode steps use different Infofunctions than target-mode steps.
The main awareness cycle is protected from irrelevant background interference (“thought silence”: fornit.ru/17954). It raises its actuality-detection threshold (depending on task importance) to prevent disruption. Only highly significant stimuli can capture Aten—otherwise, goal-directed behavior would be impossible.
An Infofunction is an innate mechanism specialized for obtaining specific information—typically via targeted retrieval from:
Specialized mechanisms for single interpretation steps are evolutionarily feasible—their complexity does not exceed that of instinct chains.
While instincts took hundreds of thousands of years to refine (e.g., intricate wasp nest-building), awareness mechanisms evolved over hundreds of millions of years in vertebrates—from fish onward—allowing far greater sophistication.
Prefrontal cytoarchitecture reveals functional differences across species. For example, Brodmann Area 10 (prefrontal cortex) occupies:
Cognitive abilities directly correlate with the arsenal and efficiency of Infofunctions.
Each evolutionary deepening of awareness introduces new innate elements:
The Beast system implements 32 specialized Infofunctions, each with potential for evolutionary optimization and competitive advantage.
One early Infofunction identifies the immediate goal for the current Actual Stimulus—e.g., restoring state from Bad to Good—defining the expected outcome of the awareness cycle.
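A sketch of an Infofunction registry with that goal-identifying function (the Infocontext is reduced here to a plain dictionary; Beast’s 32 functions are not reproduced):

    INFOFUNCTIONS = {}

    def infofunction(name):
        """Register an innate, specialized information-obtaining routine."""
        def register(fn):
            INFOFUNCTIONS[name] = fn
            return fn
        return register

    @infofunction("identify_goal")
    def identify_goal(stimulus, infocontext: dict) -> dict:
        """Early step: define the expected outcome of the awareness cycle."""
        if infocontext.get("base_state") == "Bad":
            return {"goal": "restore state to Good"}
        return {"goal": "improve the current state"}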
Historical Memory is the primary information source for awareness. Each memory frame preserves key events:
For faster access, the following are also stored separately:
In Beast, no detail Images exist—every significant novelty creates a new Image—so only the branch-node ID is saved.
Beast’s Historical Memory forms its own optimized retrieval tree, which may also occur in natural implementation.
Early natural implementations stored only Significance in context (semantic memory):
Later, memory frames were enhanced to include:
This creates an elementary behavioral rule:
“If conditions X → Image Y → Action Z → Result R (+5 / –7)”
By observing others, an individual mirrors such rules—learning life strategies without fatal errors. In Beast, operator actions are treated as authoritative, so mirrored rules are saved with maximum positive Significance (faith).
Specialized Infofunctions retrieve context-matching rules for trial behavior. The longer the matching rule-chain, the more confident the action prediction.
Most conscious behavior follows this principle: blitz chess, fluent conversation, passive-mode daydreams, and dreams all unfold via rule-chain activation.
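The rule format and its retrieval reduce to a compact sketch (the confidence measure is an invented stand-in for “longer chain, more confident prediction”):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        conditions: frozenset    # context X
        image: str               # trigger Image Y
        action: str              # action Z
        result: float            # Significance of outcome R, e.g. +5 or -7

    def matching_rules(rules: list[Rule], context: frozenset, image: str) -> list[Rule]:
        """Retrieve the rules whose conditions are satisfied right now."""
        return [r for r in rules if r.image == image and r.conditions <= context]

    def chain_confidence(chain: list[Rule]) -> float:
        """The longer the matching rule-chain, the more confident the prediction."""
        return min(1.0, len(chain) / 5.0)   # illustrative saturation at five links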
Every act of conscious attention generates a new Historical Memory frame, enriched with interpretation results. Thus, Historical Memory is the “I”—preserving the individual’s unique experience and its ego-centric Significance.
The Significance profile of the attentional object is extracted “on the fly” from Historical Memory during early interpretation steps, enriching the Infocontext with object properties. Each new perception updates the model with context-specific Significance, enabling ego-centric prediction of the object’s impact.
The Semantory’s core function is not description, but prediction of ego-centric Significance—a direct extension of the Egostat’s adaptive goal. “Understanding” = “predicting interaction consequences” (Meaning = conscious ego-centric Significance).
The Semantory is not a static database, but a dynamic model of how an object affects the organism under varying conditions. When combined with behavioral rules from mirrored observation, objects acquire Significance based on observed outcomes of their behavior.
The Semantory allows avoiding trial-and-error in novel-but-similar situations.
In Beast, a dedicated working memory for Semantory content was initially implemented but later deemed redundant—direct Historical Memory retrieval proved sufficiently fast and convenient.
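Such on-the-fly prediction can be sketched as a direct query (Historical Memory is reduced to (object, context, Significance) triples; the nearest-context averaging is an assumption):

    def predict_significance(history: list[tuple], obj: str, context: frozenset) -> float:
        """Semantory query: predict the ego-centric consequences of interacting
        with obj by averaging its stored Significance in the closest contexts."""
        matches = [(len(ctx & context), sig) for (o, ctx, sig) in history if o == obj]
        if not matches:
            return 0.0                        # unknown object: no meaning yet
        best_overlap = max(overlap for overlap, _ in matches)
        scores = [sig for overlap, sig in matches if overlap == best_overlap]
        return sum(scores) / len(scores)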
The Dispatcheron implements the awareness algorithm across all conscious and unconscious processes, flexibly modulating flow based on the current Infocontext. Key informedness changes redirect the algorithm to specialized pathways.
The Dispatcheron is a network of innate reactions (Genoreflexes with internal effects), co-evolving with other innate awareness mechanisms (Historical Memory, Infocontext, Infofunctions).
It regulates key process elements:
Though this constitutes an algorithm, it is not a fixed program. At each step, the Infocontext is updated with experience-based information from Historical Memory—embedding real-world cause-effect logic into the process. Guided by ego-centric Significance, this manifests as voluntary choice in interpretation direction, experienced subjectively as continuous ego-centric flow.
All innate mechanisms and background cycles operate “in the dark”—their work is unobservable and unassessable. Only the pure dynamics of Infocontext updating can be evaluated—if Aten is directed to the awareness process itself as an Actual Stimulus, producing self-awareness (fornit.ru/1277). If consciousness is the processing of an actual stimulus, then the processing itself can become that stimulus.
Significance evaluation during interpretation is performed by specialized Infofunctions, refining the Semantory—including self-understanding.
The Dispatcheron is the central coordinator integrating all components (Infocontext, Aten, Historical Memory, Infofunctions, awareness levels) into a unified, flexibly managed process. But it is not a homunculus—merely an innate mechanism.
This fully aligns with the spirit of “circuitry of life”: no mystical centers—only cause-and-effect relationships.
We can now deepen our understanding of the key components of the awareness process.
The proposed model accounts for the vast array of existing theories of consciousness (fornit.ru/23531, fornit.ru/19813), their classifications, and criteria for evaluating theories of consciousness (fornit.ru/68293). All significant publications have been reviewed and critiqued (fornit.ru/a11), including exotic theories (fornit.ru/69716). A “defectoscope” for consciousness theories, grounded in scientific methodology, is also available (fornit.ru/68875).
A goal is not an abstract desire, but an operational component of the Infocontext that defines the criterion for terminating the interpretation cycle. The goal is the bridge between problem and solution, between novelty and automatism.
Reflexes and automatism do not require motivational goals; they are triggered by any initiating stimulus within the current context of an active behavioral style (feeding, reproductive, exploratory, defensive, etc.). However, when a problem arises in finding an appropriate reaction under novel conditions, something must define what this new reaction should achieve. Let us call this goal-directed motivation (fornit.ru/67888), or simply a goal. The goal is the most general context for interpreting the actual stimulus in the attempt to find actions that achieve it.
The need to define a conscious goal arises only at the psychic level, and during the interpretation cycle, the goal can be arbitrarily adjusted. Below the psychic level, reactions are executed strictly within one of the homeostatic behavioral contexts.
A goal is information for the Infocontext that preserves what must be achieved through one’s actions. It is an Image + Significance + achievement criterion (the features indicating a successful outcome). Not just “bicycle,” but “I’m riding a bicycle through the park = +7.”
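As a data structure, then, a goal is a small record rather than a vague desire (a sketch; the criterion-as-feature-set encoding is an assumption):

    from dataclasses import dataclass

    @dataclass
    class Goal:
        image: str            # "riding a bicycle through the park"
        significance: float   # +7
        criterion: frozenset  # features whose presence confirms success

        def achieved(self, perceived_features: frozenset) -> bool:
            """The termination check for the interpretation cycle."""
            return self.criterion <= perceived_features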
A specialized Infofunction identifies the goal on the first step of the interpretation process, in either target or passive mode. This Infofunction has been evolutionarily refined with ever-expanding capabilities. In target mode, it initiates a search for actions capable of achieving the goal. Without a goal, this mode is impossible. In passive mode, the goal provides the context for fantasy development. Even without a defined goal, a scenario will still be generated—but without a specific context.
The earliest evolutionary goals are homeostatic states—for example, normalization of energy, water, or oxygen Vitals (each subsequent one having higher Significance than the previous), protection from damage, etc. The most general goal can be the state of Good, which carries high Significance.
The goal-identification Infofunction detects current needs and competitively selects the most significant goal under the given conditions.
At a more complex evolutionary level, this Infofunction uses Historical Memory to determine what actions lead to improved states. This allows setting goals that may initially cause negative sensations but ultimately yield high positive outcomes. Thus, a goal might involve provoking a reaction from another entity (living or inanimate) or from oneself.
The Significance of social ethics can so outweigh homeostatic disruption that goals directly harmful to one’s own state may be activated. The same applies to beliefs in self-discipline, religious convictions, or personal overvalued ideas.
Without goal-directed motivation, there will be no attempt to solve the reaction problem, and interpretation defaults to passive, goalless mode (fornit.ru/68279), during which Image Significances are clarified and new Image combinations are discovered.
Even more passive states are called laziness (no need to interpret anything at all: fornit.ru/652) and stupor (a need to react, but no idea how to do so in a novel situation: fornit.ru/989). It is not that laziness causes the absence of goal-directed motivation; rather, goal-directed motivation fails to arise when the organism’s adaptive systems shift it into a state of non-urgency.
Laziness can be overridden by urgent needs to restore homeostatic Vitals, signs of danger, or the genetically predetermined (“unconditioned reflex”) motivation of “Dissatisfaction with the Status Quo” (fornit.ru/870).
Every new stimulus that triggers interpretation must be checked for whether it offers an opportunity to resolve a problem stored in the Dominanta array. If so, an attempt is made to perform an action, which is then evaluated by its effect. If the effect is negative, the Dominanta remains open.
Although Dominanta data structures have no inherent expiration, during laziness or sleep a function scans the Dominanta array and closes those that are no longer relevant.
A Dominanta is a goal deferred in time due to the absence of conditions for its realization.
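A sketch of this re-checking and sweeping logic (all structures are illustrative):

    dominantas: list[dict] = []   # each: {"goal": ..., "conditions": set, "open": bool}

    def attempt_action(goal) -> float:
        """Stub: perform the deferred action and return its evaluated effect."""
        return 0.0

    def on_new_stimulus(stimulus_features: set) -> None:
        """Check whether the stimulus offers a chance to resolve a stored problem."""
        for d in dominantas:
            if d["open"] and d["conditions"] <= stimulus_features:
                if attempt_action(d["goal"]) > 0:
                    d["open"] = False          # resolved; otherwise it stays open

    def sleep_sweep(still_relevant) -> None:
        """During laziness or sleep, close Dominantas that lost relevance."""
        for d in dominantas:
            if d["open"] and not still_relevant(d):
                d["open"] = False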
While static Images are activated by recognizers of unique perceptual or action feature combinations, the awareness process also needs abstractions: informational representations whose Significance can be arbitrarily modified, and from which new Images can be constructed.
Abstractions are so detached from reality—though they reflect it (it is impossible to imagine something entirely novel, as all mental content is assembled from known elements)—that they can be manipulated freely, forming the most fantastical combinations. This is precisely what occurs in passive-mode awareness. Abstractions like “unit,” “happiness,” “truth,” “meridians,” “time,” etc., do not exist in physical reality; they are mental constructs that emerge through understanding, mirroring, and personal fantasy.
Most abstractions are products of mirroring; only in creative, goal-directed, or passive fantasy do original representations arise—often before they are verbalized. When the need arises to communicate such representations, one must select words, analogies, and invent terms to denote them.
The importance of abstractions lies in their ability to suggest entirely new actions, which can then be tested in reality under suitable conditions. Passive mode is thus a source of novelty more flexible than mere mirroring of others’ behavior.
Abstractions have a dedicated storage area within the psyche. Initially, static Images simply form their analogs—abstractions—during Significance evaluation in Understanding Models (fornit.ru/69260). Later, they acquire elements uncharacteristic of objective reality.
The simplest abstractions, most fundamental to awareness, are the organism’s homeostatic baseline states: Norm, Bad, or Good. At the interpretation level, these states give the Dispatcheron clear direction for the next problem-solving step:
Abstractions do not exist at the level of physical processes—they emerge at the level of description, interpretation, and subjectivity. Yet this does not make them less real; they are real as structures of experience and mechanisms of understanding that shape behavior.
An abstraction is a quantum of consciousness. It is universal: the same abstraction evokes identical understanding in any mind. The abstraction “one” is shared by all rational beings. This does not correspond to physical object properties—even two identical nails are distinct physical entities. In this sense, abstractions are immaterial (fornit.ru/69763) and possess specific properties (fornit.ru/71010):
Property of Arbitrary Autonomy
Abstractions can be mentally transformed, combined, destroyed, and recreated at will. Unlike perceptions, which are rigidly tied to external stimuli, abstractions are subject to the agent’s control. This underlies fantasy, planning, and thought experiments.
Example: mentally “repainting” a car or “rearranging” furniture—manipulating abstractions without altering reality.
This property is the foundation of creativity and imagination. Without it, there would be no art, science, or goal-setting.
Implementation Independence
Arbitrary adaptive systems can be implemented in any way that ensures experiential informedness—on transistors, in software, etc.—yet they will generate abstractions with the properties described here. What is implemented on neurons in one system can be implemented on transistors in another (fornit.ru/art10). This points to the informational, not material, nature of abstraction.
This is a strong version of functionalism, and it is not universally accepted: many philosophers (e.g., proponents of the “philosophy of mental content”) argue that qualia can be subjective and not fully reducible to functional relations.
Innate Ego-Centricity (fornit.ru/70018)
The Significance tied to abstractions generates a holistic sense of self (fornit.ru/70860). However, the complete arbitrariness of abstractions contradicts the uniqueness of the subject, creating only an illusion of self-identification (fornit.ru/70640). We feel unique, but everything we imagine and experience consists of impersonal abstractions, even those meant to capture uniqueness.
Property of Uniqueness
In other words, there is nothing in us that makes us unique in the awareness process. Two snowflakes, identical in form, size, and chemical composition, are still two distinct snowflakes—their uniqueness lies in their location in physical space-time (fornit.ru/70790). But two thoughts lack physical uniqueness; they are based on the same abstractions, indistinguishable in essence. The concept “one” is a single experiential meaning for all who use it. Although each abstraction is physically represented by a recognizer in the brain, to the subject it has no connection to the physical world—it can be implemented in any way that ensures experiential informedness (fornit.ru/68830), yet its mental essence remains universal.
Uniqueness is the impersonality of abstractions in their universal experience. Any individual experiences informedness through the same abstractions as any other being possessing them—indistinguishably.
This property, though seemingly sacrosanct to the point of controversy, leads to profound conclusions described in the article on Ego (fornit.ru/1648). It is precisely this property that allows the deepest insight into subjective experience and the foundations of subjective informedness (fornit.ru/70860).
Property of Contextuality
Due to uniqueness, abstractions are easily formalized—assigned conventional symbols for communication. In nature, this occurs via verbal or nonverbal signals. Correct understanding of a formalized abstraction requires transmitting the conditions of its application, because Significance is always assigned within a specific contextual situation. The same abstract Image can have opposite Significances in different contexts: an apple is positive when hungry, negative when overfed.
Thus, formalization must denote not only the abstract Image but also its Significance context—or simply transmit the Significance itself. An abstract Image without defined Significance has no meaning (meaning = conscious Significance of an Image) and degenerates into an uninformative essence. Abstractions without Significance do not exist.
Many, especially mathematicians, rely on formal rules and postulated logic to generate new representations—but this is an illusion. Mental models (systems of abstractions with specific interactions) always arise first; only then can they be formalized for communication.
Examples:
Property of Categoriality (Level of Abstraction Embedding)
Understanding Models gradually develop a hierarchy of embedded abstractions through experience. Competence in a domain depends on how many levels of this hierarchy are formed. A child or novice possesses only the first level: direct Image–Significance binding in current conditions.
The article (fornit.ru/70928) describes how conflicts in mutual understanding arise from differing competence levels, as in the Dunning-Kruger effect. Low understanding limits the ability to forecast success because one lacks experience with the depth of unsolved problems in the field, creating an illusion of clarity.
Abstract categories are based on prototypes—“most typical” examples (e.g., a sparrow, not a penguin, is the prototype for “bird”). Abstractions here are averages or generalizations from experience in specific conditions.
Such abstractions can form relationships (e.g., “time is money,” “love is a journey”) with context-dependent meaning. This evolves beyond primitive semantic memory (where an Image is linked only to its Significance) to episodic memory frames, where a stimulus is linked to a response and the Significance of that response—yielding more complex behavioral rules.
However, only abstract Images have internal elements that act as recognizers (fornit.ru/5389, fornit.ru/5089); relationships are stored in Historical Memory frames and retrieved via mental queries during interpretation (fornit.ru/68522).
People group abstractions into categories (e.g., “bird,” “tool”) with radial structures (central and peripheral examples), not strict definitions.
Examples of abstraction hierarchy development:
Example 1
Example 2
Understanding Model hierarchies may be linked to formal symbols for communication or remain private to the subject’s interpretation.
Initially, these hierarchies mirror the perceptual Image hierarchy from the tertiary parietal cortex (correlation with the objective world, from simple features like lines/angles to complex ones like faces/objects: fornit.ru/70785).
Experientiality
Abstractions are not mere symbols—they are accompanied by an inner sense of meaning: “what it is like to think of 5,” “what it is like to love.” Computations occur in adaptive interpretation algorithms, while experiences accompany the informational results as a subjective Infocontext that evolves with each interpretation step (fornit.ru/70864).
Even a formalized abstraction (e.g., an equation) evokes qualia of understanding—“illumination,” “clarity,” “coherence.” A mathematician who grasps a proof feels “beauty” or “elegance”—these are qualia of abstract understanding.
This property distinguishes mechanical symbol manipulation (as in AI) from conscious understanding, marking a conventional boundary between computation and consciousness—just as conventional as the boundary between an object’s form and its content.
Translatability
Abstractions can be communicated to others, but never fully as they exist in one’s mind. Transmission requires conveying context, Significance, and experience—which is not always possible. The newer or deeper the Understanding Model, the greater the distortion in transmission.
Example: A teacher explains quantum superposition via Schrödinger’s cat, but the student receives only a distorted, simplified version.
Generativity
Abstractions do not merely store experience—they generate new understanding. By combining abstractions, a subject can create something never perceived—e.g., “unicorn,” “infinity,” “null space.” This underlies scientific hypotheses, mathematical constructs, and philosophical concepts.
Example: The concept of “imaginary numbers” began as abstract play but later enabled powerful applications in physics and engineering.
Translatability and Generativity logically follow from the properties above. They describe the difficulty of transmitting deep abstract models and the capacity to generate new concepts (unicorn, imaginary numbers) whose meaning, once formed, is universal: independent of culture, personality, or context.
Informational Properties of Abstractions
The awareness process is an iteration of discrete interpretation steps within the Priority Attention Channel (fornit.ru/70759). This process has a common information structure (fornit.ru/68540) updated at each step, enabling the next step’s direction to be determined in a new context. Conscious experience is the subject’s informedness (fornit.ru/69997).
The quantum of conscious experience is information: an Image (perceptual or action-related) linked to its Significance in the current situation—i.e., an abstraction. Thus, consciousness consists exclusively of abstractions.
An information portion (fornit.ru/68830) is an abstraction of a specific type (Image linked to Significance), retrieved (usually from Historical Memory: fornit.ru/67560) to update global informedness and create context for the next interpretation step.
An abstraction is the minimal unit of conscious experience—informedness about the currently revealed aspect of a problem (target or passive). Each interpretation step is accompanied by conscious experience of new information. Each experience is an abstraction carrying meaning (conscious Significance). Consciousness is an informational process in which the subject iteratively updates informedness via Significance-based abstractions.
Most abstractions entering consciousness are not created on the fly but retrieved from Historical Memory, which stores past events with context (where, when, how one felt). Upon a query (e.g., “What is that sound?”), the system searches for similar patterns, retrieving abstractions laden with Significance: “This happened before a fall,” “After this, my mother comforted me.” Thus, memory is not an archive but an active participant in interpretation, supplying meaning-laden abstractions.
Experience is not “what happens in the head.” It is what the subject knows at this moment about self and world, with Significance. When you say, “I’m sad,” “I remember that day,” “I feel misunderstood,” you are fixing your current informedness—a chain of abstractions in the Priority Attention Channel.
No abstractions → no meaning. No meaning → no experience. Only at the level of Significance-laden informational abstractions does “I am experiencing this” arise.
The word “meaning” is especially vague—not only in dictionaries but also in scientific literature. Often it is discussed in the context of “the meaning of life.” To say that the meaning of life is the subject’s conscious Significance of life is disappointing to those seeking a sacred destiny. Yet philosophers rejecting mystical “explanations” conclude that meaning is Significance.
The word “interpretation” is clearly tied to meaning and Significance formation. Interpretation is the act of conscious assignment of Significance to a phenomenon. “Understanding” is the result of interpretation—a state where meaning becomes integrated, holistic, and conscious (fornit.ru/1073).
Initially, meaning is derived from existing Understanding Models (the semantic part of Historical Memory). If no information is found for an object of attention, it appears meaningless and incomprehensible. With each interpretation, information about its meaning enriches until the subject feels all important aspects are known.
Since meaning (in this context) is Significance, meanings are the core of interpretation—leading to fuller understanding and more confident solutions to the problem of finding alternatives to habitual behavior under novelty.
There is a subtle distinction between Significance and meaning:
Example:
Thus, the same object can have different meanings with the same numerical Significance if context changes interpretation.
Meaning is always tied to abstraction: a concrete Image (“this fire”) has Significance; an abstraction (“fire in general”) has meaning, generalizing experience.
If an object has no Significance in any context, it becomes meaningless:
This is not a philosophical crisis but a functional state of the Egostat—its Semantory cannot map the object to known models.
Meaning is Significance consciously interpreted within an Understanding Model. It does not exist “outside the subject,” yet it is not an “illusion”—it is real as a structure of experience guiding behavior.
Loss of meaning is not metaphysical emptiness but an Egostat signal: “This object does not fit my world map. Interpretation is required.”
The concept of voluntariness is as sacrosanct in everyday perception as meaning. The question of “free will” is resolved very differently by philosophers; mystical philosophy simply postulates it. At first glance, free will seems to mean acting unpredictably—which can be mimicked by random actions. But freedom is better understood as acting as one wishes, without external coercion. Yet internal coercion (consideration of consequences) is unavoidable, and internal reasons often stem from external influence: “Do this, or I’ll shoot you.”
Everything reduces to a decision-making process that weighs all relevant reasons by their competing Significance, based on Historical Memory experience.
For habitual actions in familiar conditions, the question does not arise—interpretation work has already been done, verified, and yielded positive results. There is no need to doubt or repeat the work.
But if novelty appears, doubt becomes appropriate—new conditions may yield unexpected results. Then interpretation is needed to assess risks and form a new hypothesis. The decision may lead to the same action, but its result is re-evaluated. If positive, the action is confidently performed even in the expanded conditions. If negative, one must seriously consider a different action and begin constructing it mentally.
In any case, novelty triggers re-interpretation to find a desired alternative to the habitual. If the alternative coincides with the habitual, one only needs to verify it in practice.
Thus, voluntariness in interpretation is the finding of an alternative to the habitual.
Willpower is required when the new solution significantly differs from the habitual, causing doubt before the result is known. One must push the new solution through:
This can involve painfully exhausting interpretation efforts and experiences of high negative Significance—this is what manifests as willpower.
Voluntariness is a function of the individual adaptive system that generates alternatives to habitual reactions under novelty. It does not abolish determinism but implements it as a search for optimal solutions based on experience.
“Free will” is the subjective experience of this process: the feeling that “I could have acted otherwise,” because the system genuinely considered alternatives.
The goal of consciousness is not “to be free,” but to create a new automatism that makes repeated “freedom” in that situation unnecessary.
The result of voluntariness is the chosen action, and the iteration process is based on experience of good and bad outcomes in context. Thus, subjective freedom turns out to be conscious necessity (Spinoza).
Awareness at the first two depth levels is experienced but does not involve thinking; everything is done quickly via established rules.
Those who have tried to observe their own thinking are discouraged by how elusive and chaotic it seems. Explaining how one arrived at a decision is only possible after the decision is made—and even then, only with effort to reconstruct logical sequences. Access to such sequences is possible only if the chains of steps, with distinguishable information requests, were stored—i.e., as a mental automatism.
The difficulty of self-observation stems from the fact that there is only one Priority Attention Channel. If it is already busy with interpretation, a second process for observation is impossible without interruption.
It is possible that the other brain hemisphere could help. With two independent adaptive mechanisms (perception to interpretation), interhemispheric interaction might allow one hemisphere to “peek” at the other—but this remains unexplored. However, there is reason to believe such interaction is important for integrating different types of mental processes.
Mental automatism—problem-solving methods—can be examined by directing attention to them, which requires skill. The sequence of Infocontext changes during problem-solving reveals how the solution process unfolded.
All this is so far from empirical research data that speculation should be avoided—though self-observation offers many clues.
Thinking is an analyzable process of Infofunction queries, characteristic of the third depth level of awareness, because the first two levels involve too few, indistinct steps. Third-level Infofunction algorithms are more complex and can take several seconds in difficult cases—during which consciousness “stalls” in experience, though subjectively it feels smooth and continuous (as there is no way to perceive pauses between Infocontext updates).
While hippocampal EEG shows regular oscillations (corresponding to signal loop time for held Images), awareness-cycle oscillations are unpredictable and chaotic—because step duration depends on Infofunction processing time. The deeper and more complex the solution, the longer the periods.
The question “What is a thought?” has always occupied philosophers. But thoughts do not reveal their essence, and without knowing the algorithm, there is no chance of guessing how individual thoughts arise—only their informational results are reflected in experience.
Awareness is the general adaptive search for alternatives. Thinking is its third, most resource-intensive level.
Thinking is the experienced process of the third awareness depth level, in which Infofunctions are sequentially activated to search for or construct solutions when ready-made rules (NoReflexes or episodic rules) are insufficient.
It is not a stream of Images or an “inner dialogue,” but an algorithmic iteration managed by the Dispatcheron, subjectively experienced as a “dead end,” “insight,” or “smooth reflection.”
A thought is a new portion of informedness (Infoabstract) entering the Infocontext after Infofunction work.
Example: Trying to recall someone’s name.
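As a loose sketch of how such a recall might proceed as an iteration of Infofunction queries (the toy memory, the cues, and the narrowing rule are all invented for illustration):

```python
memory = {"met_at_conference": ["Anna", "Boris"], "wears_glasses": ["Boris"]}

def infofunction(cue):
    """A mental query: retrieve candidates associated with a cue (may take seconds)."""
    return set(memory.get(cue, []))

def recall_name(cues):
    infocontext = {"candidates": None}
    for cue in cues:                         # each step is chosen from the current context
        found = infofunction(cue)            # nothing is experienced during retrieval
        if infocontext["candidates"] is None:
            infocontext["candidates"] = found
        else:
            infocontext["candidates"] &= found   # the new thought narrows informedness
        if len(infocontext["candidates"]) == 1:
            return infocontext["candidates"].pop()   # the answer "suddenly appears"
    return None                              # unresolved: may continue as a background cycle

print(recall_name(["met_at_conference", "wears_glasses"]))  # Boris
```

Each intersection step here stands for one "thought": a new Infoabstract entering the Infocontext after Infofunction work.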
This is an unexpected topic for this book, unrelated to mystical or psychological “mindfulness” concepts. A detailed exposition with mathematical formalization is available at fornit.ru/71034; only the essentials are presented here.
The philosophical term “qualia” is inadequate—it defines nothing. We need a term that quantitatively characterizes the adaptive capacity of subjective experience—say, awareness (osoznannost), meaning the strength of qualia: clarity, intensity, completeness, and efficiency of dynamic informedness.
Thus, awareness is separated from the general awareness process as the voluntariness of choosing (fornit.ru/art11) the next interpretation step, whose result updates global informedness. However, this choice is made by the innate Dispatcheron, so this voluntariness is not arbitrariness within awareness but a hardwired algorithm whose necessity at each step is determined by the Infocontext.
The efficiency of the process directly depends on the Infocontext’s state—on how completely informedness occurs without omitting important elements. Human awareness is more efficient than a cat’s, but even humans can experience awareness failures. A creative problem-solver has higher awareness than someone long detached from creativity. A healthy person has higher awareness than a mentally ill one.
In psychology, “awareness” varies widely in meaning, sometimes reaching mystical notions. Here, it is not a synonym for mindfulness, a meditative state, or mystical “presence,” but a strictly defined term grounded in the adaptive mechanism of awareness—finding alternative actions under novelty.
In its adaptive function, awareness is the capacity for alternative choice under uncertainty. This concise definition is elaborated below. Without such a definition, the awareness process appears fully self-sufficient, and subjective experience seems merely an epiphenomenon—a side effect without adaptive functionality. It would also seem that the general informedness structure (fornit.ru/68540), updated at each awareness step, automatically generates subjective experience—implying that implanting this structure anywhere would yield subjective experience.
Subjective experience is not an epiphenomenon but a regulatory organization of informedness, ensuring flexibility and efficiency in awareness. It is formed from discrete quanta of awareness—abstractions that do not exist in nature as entities but constitute the subject’s current informedness—the context for interpretation.
Imagine being in a forest (information flow). Quanta of awareness are map labels: “river,” “danger,” “path.” The context is the assembled map. Subjective experience is the feeling of holistic direction, confidence, anxiety, or clarity—what helps choose the next step. But this choice is not made by informedness itself; it is made mechanically by a “dispatcher”—a system of reactions formed through evolutionary complexity and optimized by selection.
Thus, subjective experience is informedness via actualized abstractions, creating context for the next interpretation step. Awareness is the wholeness and efficiency of this informedness and the arbitrary-choice system’s effective use of it, creating the effect of voluntary interpretation direction.
It should be understood that actualizing abstractions has a material basis (selecting Image ID + Significance ID), but the abstraction itself is not an ID—it is a conditional symbol of the Image’s functional Significance for the subject, enabling a holistic understanding of meaning and choice of the most desirable action.
This means awareness can be viewed in two equivalent aspects of the key awareness-process elements ensuring adaptive efficiency: as abstractions, or as their material basis, Image IDs paired with Significance IDs.
The quantitative characteristic of awareness in adaptive function is the completeness of the informational picture, ensuring the most effective solution to the problem of finding an alternative to habitual reaction. If, based on final success in finding a useful alternative, we identify the awareness elements (abstractions or, alternatively, Image IDs + Significance IDs) that played a decisive role, we obtain the completeness of awareness elements. All data for this evaluation reside in experience—i.e., Historical Memory. Awareness elements are selected by context-matching conditions between Historical Memory frames and current interpretation conditions—full matches yield confident results; partial matches yield more tentative ones.
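Under the stated assumptions (that each frame records its conditions and the elements that proved decisive; the 0.5 match threshold and field names are invented), the completeness measure could be sketched as:

```python
def completeness(active_elements: set, frames: list, context: set) -> float:
    """Fraction of decisive awareness elements, from context-matching frames,
    currently present in awareness; full matches weigh more than partial ones."""
    weight_sum, hit_sum = 0.0, 0.0
    for frame in frames:
        overlap = len(frame["conditions"] & context) / len(frame["conditions"])
        if overlap <= 0.5:
            continue                  # too weak a match to count as evidence
        for element in frame["decisive"]:
            weight_sum += overlap
            if element in active_elements:
                hit_sum += overlap
    return hit_sum / weight_sum if weight_sum else 1.0
```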
Unlike cases of complete informational data, there are cases where causes (e.g., depersonalization, derealization) limit the number of awareness elements used in interpretation, extracting fewer critical data from memory episodes, potentially failing to find a useful solution.
This directly shows how awareness’s adaptive functionality strengthens with enriched experience. Without experience, there is no awareness. Limited experience yields inadequate or suboptimal solutions due to incomplete awareness elements. Ideally, experience forms a NoReflex—a confident (pre-verified) solution requiring no awareness, executed under sufficiently matching conditions.
Such experience, requiring no awareness, is called intelligence (a terminological definition, unlike philosophical ones: fornit.ru/475).
Intelligence is often confused with consciousness or the process of finding new solutions. But IQ tests assume the subject already possesses solution experience, not that they solve problems during the test. Finding a new solution can take years, while tests are time-limited.
Intelligence is verified experience—a pre-existing system of solutions (including solution methods). Such experience “fires” immediately, so these reactions are called automatisms.
If the frontal lobes are disabled (e.g., by alcohol), depriving one of consciousness, only habitual, stereotyped reactions remain—including complex ones like “ways to prove the Pythagorean theorem.” Externally, unconscious behavior is hard to detect; it manifests in details. A drunk person walks their usual path, ignoring a new puddle, unable to account for novelty, reacting only in the most general, habitual way.
This yields a quantitative criterion of intellectuality: the fewer awareness elements needed for a confident, effective solution, the higher the intelligence in that specific manifestation (e.g., a wolf’s intelligence in recognizing forest danger exceeds a human’s in the same context).
When discussing intelligence, context is essential—i.e., the conditions for which the experience is used.
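A toy expression of this criterion (the formula is an illustration, not a measurement prescribed by the theory):

```python
def intellectuality(elements_needed: int, solved_confidently: bool) -> float:
    """Higher score for confident solutions that needed fewer awareness elements."""
    if not solved_confidently:
        return 0.0
    return 1.0 / (1 + elements_needed)   # 1.0 = pure automatism (no awareness needed)

print(intellectuality(0, True))   # 1.0   e.g., a wolf recognizing forest danger
print(intellectuality(7, True))   # 0.125 a human needing many elements in that context
```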
The Dispatcheron is the tourist deciding the route based on navigational signals. Experience is not “background noise” but a navigational signal. The Dispatcheron embodies the homunculus—but without infinite regress, because it cannot decide anything on its own. It needs an informational context to choose the next interpretation step, and it needs that step to yield new information to choose again. Only this linkage creates the dynamics of interpretation iteration; without any component, the cycle stops.
To build a complete model, we must examine its interacting components.
Subjective experience (qualia) occurs as “phenomenological awareness”—the ability to notice internal states: thoughts, feelings, bodily sensations. In metacognitive form, it is awareness of one’s own mental activity (e.g., “I see that I am anxious”).
Two pathological states are known: depersonalization and derealization, which impair awareness-process efficiency, weakening awareness without stopping it.
As with depersonalization, awareness formally continues, but its quality and efficiency degrade. Sensory-perceptual richness is lost. Normally, awareness relies on vivid, multidimensional perception: colors, sounds, smells, textures. In derealization, perception becomes flat, muted, mechanical—the world loses depth, color, “warmth.” Awareness is thus deprived of its sensory foundation, becoming abstract and impoverished—“I see, but do not perceive.”
Derealization, like depersonalization, impoverishes the awareness process—certain informational components that would normally enter awareness as context remain passive. They exist in perception but are unused. Unused by what? There must be a mechanism that considers Infocontext components to regulate the next interpretation step. This step, via memory query, would yield information updating current informedness and forming a new context for the next step.
Some theories of consciousness (e.g., K. Friston’s) call this mechanism “predictive coding.” According to this theory, the brain constantly generates predictions (“I expect the world to be real, familiar, significant”), compares them with input, and triggers awareness/adaptation upon mismatch (prediction error). Friston heuristically recognized the need for such a mechanism. However, Friston’s model—reducing consciousness to prediction-error minimization (i.e., striving for stability and predictability)—is inadequate to the reality of brain adaptive processes, where errors, surprises, and instability are drivers of cognition, not obstacles (fornit.ru/70999).
In the actual-stimulus awareness scheme (fornit.ru/68516), several key mechanisms do not directly participate in the information-update algorithm. The most important is the goal-identification mechanism—defining what the solution should achieve. This mechanism activates in target-mode interpretation. In passive mode, interpretation is goalless, yielding specific adaptive effects (Significance refinement, idea generation, predictive chains).
The interpretation process itself is regulated by the Awareness Function Dispatcher. Its efficiency determines interpretation effectiveness, so it is evolutionarily optimized. Such optimization may include predictive evaluation of each interpretation moment via Historical Memory queries (fornit.ru/68522).
Another evolutionary development is rapid prediction. With each new quantum of information updating informedness, “anticipatory excitation” (per I.P. Pavlov) may activate.
All this enriches global informedness (more informedness → more possibilities) and produces the experience of situational clarity: vividness, completeness, and holistic understanding.
More evolutionarily recent structures are more vulnerable to dysfunction under pathology (stress, hypoxia, intoxication, anxiety, trauma). Resources are redirected to ancient systems, explaining reduced awareness—depersonalization, derealization, and eventually full unconsciousness, while ancient reflexes persist. Externally, one may not notice that the subject is acting unconsciously.
Depersonalization and derealization clearly affect slightly different Dispatcher functions. Complete Dispatcher failure leads to a state equivalent to lobotomy.
Thus, awareness is a quantitatively and qualitatively defined property of subjective experience, linked to active regulation of the conscious awareness process by the Infocontext, enabling effective decisions under uncertainty and novelty, ensuring evolutionary advantage and behavioral flexibility.
It is not a state but a functional characteristic:
Awareness = adaptive power of subjective experience.
“Awareness is the process. Awareness (osoznannost) is the measure of its effectiveness.”
Example: Walking in a forest, you hear a rustle.
Mathematical formalization of the awareness model: fornit.ru/71034.
Simply put, this is when you desperately want something but cannot have it. More precisely, it is a problem that currently has no solution, yet is so important that it constantly returns to mind, dominating thoughts. This is a Dominanta of an Unsolved Problem (fornit.ru/68503).
Do not confuse this with a catchy tune or phrase looping in your head—that is passive-mode thinking (fornit.ru/68279) about something once deemed important but not problematic or goal-directed. A Dominanta is a targeted problem requiring solution. Evolution found a way to eventually resolve or accept such problems and discard them from mind. Many accept them immediately—they are unlikely to create anything truly great or timeless.
Psychology has long noted this phenomenon, calling it a Gestalt: an unmet need, unrealized desire, or unfinished situation that continues to negatively affect life. All these reduce to one thing: a sufficiently actual problem not yet solved—“closing the Gestalt” so it no longer occupies thought.
If no solution is found even at the third awareness depth level, the system understands that resolution requires gradual effort. Evolutionarily, this high-Significance, persistently unresolved problem gave rise to a new mechanism: permanent memory of unsolved, dominant problems—the Gestalt. Modern “psychologists” muddy the waters, claiming Gestalts are harmful (causing stress and intrusive thoughts) and offering courses (from 30,000 rubles) to “close” them.
In reality, a solved problem yields not only actions that resolve it but also understanding of how to solve similar problems. A Gestalt does not vanish—it enriches solution experience.
Physiologists also studied this phenomenon. In the early 20th century, A. Ukhtomsky formulated the Dominanta theory, generalizing numerous facts and complementing I.P. Pavlov’s theory.
The Gestalt structure preserves:
These elements did not arise all at once; initially, only problem actuality may have been preserved, returning thought to it whenever no more actual stimuli were present. The Gestalt structure may simply be tightly linked to Historical Memory, not using separate storage. Various adaptive system implementations may competitively test many Gestalt organization variants. But in principle, all must ensure one thing: when urgent problems free up time, the most actual Dominanta activates. Even during other problem-solving or passive-mode fantasizing, intermediate information is compared with active Dominantas, and upon goal-match (even by analogy), an insight (“Eureka!”) occurs, making the solution the main awareness topic.
The Dominanta is the highest achievement of adaptive system evolution—the primary mechanism of all creativity. Yes, the process can be torturous (“pains of creativity”), but this distinguishes true creation from mere craftsmanship (improvisation)—a joyful process built on well-verified, luck-rewarded techniques.
The working prototype of the individual adaptive system (fornit.ru/beast) includes Dominanta support mechanisms. When a problem cannot be solved operationally, a structure is created for long-term memory, containing:
The problem may be solved:
To enable this, all thinking modes constantly check for relevant Dominantas in memory. Active Dominantas continuously accompany new information in the awareness picture, enabling solution discovery.
If a solution is found outside the main conscious thinking cycle, an insight (illumination) occurs, and the background thinking cycle becomes main.
In any case, the solution method is memorized for future similar situations.
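A hedged sketch of such Dominanta support, loosely following the described prototype behavior (class and field names are invented; matches_goal stands in for whatever goal-matching the implementation uses):

```python
class Dominanta:
    """An unsolved, highly significant problem kept in long-term memory."""
    def __init__(self, goal, actuality):
        self.goal = goal              # what a solution must achieve
        self.actuality = actuality    # Significance-driven priority
        self.solved = False

class GestaltStore:
    def __init__(self):
        self.items = []

    def most_actual(self):
        """When urgent problems free up time, the most actual Dominanta activates."""
        pending = [d for d in self.items if not d.solved]
        return max(pending, key=lambda d: d.actuality, default=None)

    def check(self, information, matches_goal):
        """Compare intermediate information against every active Dominanta."""
        for d in self.items:
            if not d.solved and matches_goal(information, d.goal):
                d.solved = True       # insight: the solution becomes the main topic
                return d
        return None
```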
Serious scientists strive to solve problems in target-mode thinking, persistently setting intermediate goals and testing hypotheses—yet without approaching the main solution, which requires something entirely new. Such scientists do not waste precious time on idle fantasizing; the new finally comes to them in dreams, where passive-mode awareness operates—the very mode they had previously avoided with disdain.
Once, a young researcher boasted to a famous scientist about working day and night, constantly experimenting. The scientist irritably replied: “When do you think?”
It is precisely in passive-mode free association that new scenarios and connections form. After setting an important goal, one must fully utilize this opportunity—simply relaxing and surrendering to free fantasizing.
Pains of creativity cause stress; passive mode is the best way to avoid it.
Insight (illumination) arises not during intense search, but when a background interpretation cycle or passive mode (including sleep) detects a match between current information and an unsolved-problem Dominanta (Gestalt).
Dreams are a continuation of passive mode under conditions of disabled external perception and higher awareness levels, allowing the system to process accumulated interpretation cycles, correct Historical Memory frames, and discover non-obvious connections.
In target mode, the system is confined to the current Infocontext—it searches for solutions within known spaces of significant Images and rules. Truly new solutions cannot be logically derived from the old—they require stepping beyond the current Understanding Model.
Stress activates ancient Homeocontexts (defensive, aggressive), suppressing the prefrontal cortex and narrowing attention to threats. This blocks passive mode and hinders insight formation. Passive mode is not laziness but an evolutionarily refined mechanism of unloading and recombination. This is why great discoveries often occur in the bath, in sleep, or on a walk—moments when consciousness is freed from goal pressure.
Most of our everyday actions are not the result of deliberate reasoning but are carried out via automatisms—stable behavioral patterns shaped by experience. These allow us to function efficiently in familiar situations without expending cognitive resources on re-evaluating already-verified solutions. However, even within such automated actions, a general semantic context persists—an internal understanding of why we are doing what we are doing—which continuously informs the overarching purpose of our behavior.
There are situations when novelty disrupts habitual stereotypes, making it unclear what should be done at all. In such cases, individuals may resort to chaotic, random attempts to “do something.” This is experienced as confusion and frantic impulsivity, which experience records in Historical Memory frames as leading to negative consequences. Past experience advises that in such moments it is better to stop floundering and first think carefully. One must ask: “What truly matters here? What is my goal?” It is essential to establish a general meaning for the forthcoming actions—a contextual framework that aligns individual behavioral steps into a coherent, goal-directed whole.
Action Theme as a Structure of Informedness
Meaning is the conscious Significance of an Image, associated (bound) to that Image under current conditions (fornit.ru/66643). An Image linked to Significance constitutes an abstraction, independent of external influences, which can be consciously and arbitrarily manipulated—thus generating the subjective experience of situational understanding (fornit.ru/69260).
When such an abstraction is elevated to a global level outside the immediate awareness process, it becomes an element of the Informedness structure of awareness (fornit.ru/68540)—the contextual backdrop that guides the selection of each step in processing the Actual Stimulus during an iteration, either:
The information produced at each awareness step updates the Global Informedness Picture, establishing a fresh context for selecting the next step (fornit.ru/69385).
However, certain slots in the Global Infocontext retain their values until an update becomes necessary. Foremost among these are:
These provide the broadest thematic context for interpretation cycles.
As adaptivity evolves, increasingly specific context-holding themes are added, influencing the direction of awareness steps. Examples include:
An unlimited number of abstract Images—each bound to its context-specific Significance—can be formed and activated as a current Theme. When such an abstraction becomes active, its identifier is placed into the Theme slot of the Infocontext. Thus, any existing abstraction can serve as the current Theme of awareness.
The Role of Historical Memory of Lived Experience
Historical Memory (fornit.ru/67560), once fully mature (i.e., reflecting the complete architecture of the psyche), stores each episode as a triad: Stimulus – Response – Effect (Significance: benefit or harm). This directly encodes the causal logic of reality in the form of experiential Behavioral Rules. Memory preserves such Rules for the specific conditions under which they proved effective as cause-and-effect sequences.
The Response is not stored as a full action description but as the identifier of a trigger element—the starting point of a potentially unlimited sequence of action phases. Each phase initiates only after the completion of the previous one, and the entire chain may branch depending on changing conditions.
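A minimal sketch of this storage scheme (field names are assumptions for illustration, not the actual Beast structures):

```python
from dataclasses import dataclass

@dataclass
class MemoryFrame:
    stimulus: str           # Image ID of the triggering situation
    response_trigger: str   # ID of the chain's first phase, not a full action description
    effect: float           # Significance of the outcome: benefit (+) or harm (-)
    conditions: frozenset   # the context in which this rule proved effective

def run_chain(trigger_id, phases, conditions):
    """Run action phases in sequence; each phase starts only after the previous
    one completes, and the chain may branch depending on changing conditions."""
    step = trigger_id
    while step is not None:
        phase = phases[step]
        phase["do"]()
        step = phase["next"](conditions)   # returns the next phase ID or None
```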
For an adult, such chains may include complex programs like:
The Dynamics of Automatisms and Awareness
The overall goal orientation of any action chain follows thematic contexts at all levels (mood, emotion, abstraction) and branches according to changing conditions—because for each terminal Image (fornit.ru/70785) in a familiar situation, a reliable, pre-verified automatism (NoReflex) is already linked.
Such an automatism can execute entirely without engaging the Priority Attention Channel (fornit.ru/70759). Alternatively, the terminal Image of the situation may become the most Actual Stimulus, attracting conscious attention—and thus being processed, at minimum, at Level 1 of the awareness process, i.e., monitored for acceptability under current conditions (since attention is drawn only when novelty components are present). If doubts arise at Level 1, deeper levels retrieve relevant Historical Memory to select the most positively evaluated behavioral rule, potentially altering the continuation of the habitual sequence.
If even stored experiential rules offer no solution aligned with the current Theme and Goal, the awareness process deepens into more complex, resource-intensive problem-solving mechanisms. The action chain is then interrupted. Consciousness shifts from execution mode to search-and-design mode. The action may not resume at all if no solution is found and the problem is deferred to the future—as a Gestalt (fornit.ru/69108)—with the possibility of returning to it later when new data, resources, or context become available.
Thus, adult human behavior represents a dynamic equilibrium among:
It is precisely this architecture that enables us to act swiftly in familiar environments while remaining open to novelty—without panic, yet without inertia.
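The ladder just described can be sketched as follows (a structural illustration only; the callables for novelty detection, Level 1 acceptability, and memory search are supplied by the caller and merely named here):

```python
def process_terminal_image(image, noreflexes, has_novelty,
                           level1_acceptable, best_memory_rule, gestalts):
    """Dispatch a terminal Image through the automatism/awareness ladder."""
    rule = noreflexes.get(image)
    if rule is not None and not has_novelty(image):
        return rule()                       # automatism: attention channel untouched
    if rule is not None and level1_acceptable(image):
        return rule()                       # Level 1: habitual continuation still fits
    alternative = best_memory_rule(image)   # deeper levels query Historical Memory
    if alternative is not None:
        return alternative()                # the habitual sequence may be altered
    gestalts.append(image)                  # no solution: defer the problem as a Gestalt
    return None                             # chain interrupted; search-and-design mode
```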
Among the hierarchy of adaptive mechanisms—alongside basic homeostatic contexts such as alimentary, sexual, defensive, exploratory, and replicative—there evolutionarily emerges a special context: the play context. Unlike the basic contexts, it is not directly tied to the maintenance of vitals, yet it is critically important for expanding the behavioral repertoire in conditions of novelty.
The primary function of the play context is to accelerate individual development by introducing a vital parameter that motivates playful exploration of new situations—just as the gonadal vital drives sexual behavior. This gives rise to a desire to engage in something fascinating and interesting. At the pre-psychic level, this interest operates on the same principle as the sexual behavior context, which compels the search for a partner and, once one is found, creates an aura of special regard and a unique mode of interaction.
The play context is not a mandatory attribute of life; it appears only at those evolutionary levels where the individual adaptive system acquires the capacity for the voluntary formation of novel responses. Its function is to simulate alternative behavioral scenarios under safe conditions, thereby allowing experience to accumulate without risking fatal deviation of vitals from their norm.
Unlike basic behavioral styles, which are motivated by direct deviations in life-sustaining parameters, the play context is activated in the presence of cognitive surplus—a state in which all vitals are within norm, yet there is insufficient actual novelty to trigger the process of awareness. Under such conditions, the system does not lapse into a passive mode but instead initiates internal generation of conditional novelty to maintain adaptive flexibility and readiness for future changes.
This process is governed by a special regulated parameter—the play tone—which may be regarded as a higher-order vital. Its norm is expressed as the minimal necessary level of cognitive engagement, sufficient to sustain the functionality of awareness mechanisms. A downward deviation in play tone (e.g., due to prolonged routine or isolation) is perceived by the system as a threat of adaptive degradation and triggers motivation to restore interest—just as a drop in glucose levels elicits alimentary behavior.
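A minimal sketch of play tone as such a regulated parameter (the norm threshold and decay rates are invented for illustration):

```python
class PlayTone:
    """Play tone as a higher-order regulated parameter (vital) with a norm."""
    def __init__(self, norm_low=0.4, value=0.6):
        self.norm_low = norm_low      # minimal necessary level of cognitive engagement
        self.value = value

    def tick(self, novelty_encountered: bool):
        # prolonged routine or isolation lets the tone decay; novelty restores it
        delta = 0.1 if novelty_encountered else -0.05
        self.value = max(0.0, min(1.0, self.value + delta))

    def motivation(self):
        """A downward deviation is treated as a threat of adaptive degradation."""
        return "generate_conditional_novelty" if self.value < self.norm_low else None
```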
The play context is realized through conditional Themes—structures of informedness in which models of understanding rules, roles, and consequences are formed. Examples of such Themes include “exploration,” “competition,” “creativity,” and “learning.” Each Theme contains:
In the course of play, the subject:
Thus, play serves as an internal laboratory of awareness, where decision-making mechanisms are practiced under conditions of uncertainty. It enables the system to:
When the play context is suppressed (for example, by rigid routine or authoritarian training), the system loses the ability to arbitrarily update world models. This leads to a depressive state of meaning formation, in which even preserved vitals do not provide full-fledged adaptivity: consciousness ceases to generate new solutions and closes in on habitual automatisms.
Play, therefore, is not mere entertainment but an evolutionary mechanism for sustaining adaptive activity at the psychic level. It ensures the effective enrichment of personal experience and keeps the system in a state of readiness to meet novelty as an exciting adventure.
The unconscious appears even more mysterious than consciousness itself. Everything that is not subjectively experienced is commonly attributed to it—but it would be foolish to include in the unconscious such things as the basic mechanisms of homeostatic regulation, bodily organs, receptors, and effectors. Doing so would render the definition meaningless and obscure its functional role.
Nor is the unconscious simply everything unexperienced that belongs to the psyche.
The distinction between main (conscious) awareness cycles and background (unconscious) cycles directly assigns the former to consciousness and the latter to the unconscious.
Just as with consciousness, the mechanisms of unconscious, background activity have been modeled in the working prototype of the individual adaptive system Beast (fornit.ru/beast), which proved decisive in identifying the properties and functionality of the unconscious (fornit.ru/art6).
The last major collective scholarly work on the unconscious—a four-volume set titled The Unconscious: Nature, Functions, Methods of Research—was published in the USSR over forty years ago (fornit.ru/bc4). The final, fourth volume (1985) summarized the results of a discussion held at the Second International Symposium on the Unconscious in Tbilisi in 1979. It noted that despite abundant phenomenological evidence of the unconscious’s role in cognition, as well as diverse research in psychology and related fields (psychophysiology, psychiatry, etc.), systematic organization of this material remains difficult due to methodological issues—particularly the wide divergence in conceptual frameworks, the multiplicity of approaches, and insufficient differentiation between philosophical and psychological interpretations of the unconscious.
This four-volume set vividly illustrates the spectrum of views on the unconscious, which can be conditionally grouped into several principal approaches:
Numerous “psychologies of the unconscious” and descriptions of its individual mental effects exist.
Without a model of psychic organization, it is impossible to isolate precisely what in the psyche produces those unexperienced yet manifest effects. No such model currently exists among academic researchers, which is why many prefer to avoid mentioning the unconscious altogether in their theories.
It is not innate mechanisms involved in awareness, but the operation of background awareness cycles, that reliably produces all phenomena attributable to the unconscious—chiefly insight and intuition.
Intuition and insight are two distinct yet often interrelated manifestations in decision-making and creative activity.
Intuition is the ability to perceive, recognize, and interpret input without explicit conscious reasoning. It is based on accumulated life experience, knowledge, and unconscious or non-conscious processing. Intuitive judgments arise instantly or with minimal delay and are accompanied by high confidence in their correctness.
Insight is the sudden emergence in consciousness of an idea crucial to solving a problem—often described as an “aha!” or “eureka!” moment. It may arise after prolonged reflection or unexpectedly upon encountering new information. Like intuition, insight carries strong confidence in the validity of the idea—but as an affective burst, because it is not merely understanding, but the solution itself.
One may postulate that insight is an intuitive answer to a difficult problem in the form of a decisive idea. Intuition is the feeling of correctness about a conjecture regarding the essence of what is perceived; it often manifests as insight. The idea produced by insight is then evaluated intuitively and refined into a workable solution.
The mechanisms underlying intuition and insight are identical; the difference lies in the informational context:
Insight typically requires significant preparatory time—accumulating data, attempting solutions, building analogies. Yet the breakthrough may occur instantly when a piece of information provides the missing link. Such “luck” remains random, but preparation directly increases its likelihood.
The analogy-based solution mechanism is straightforward to model and has been implemented in the Beast system (fornit.ru/beast) for active Dominantas of unsolved problems (fornit.ru/68503). The mechanism of intuition and insight is more complex but has also been worked out in principle.
Although intuition is linked to the unconscious, the process always occurs within the main awareness cycle in response to an actual stimulus (fornit.ru/68516). Misunderstanding arises because popular notions of the unconscious lack clear boundaries: everything unexperienced is often labeled “unconscious,” even if it stems from brain reflexes. But the unconscious is one thing; all other unexperienced processes are another (fornit.ru/art6). Otherwise, the term loses all meaning.
The unconscious consists solely of background cycles—previously actual and conscious processing loops that, having lost actuality, recede into the background. When a highly significant image related to a Dominanta emerges within such a background cycle, that cycle becomes the new main (conscious) one. This is the phenomenon of insight.
The feeling of correctness arises from the high Significance (fornit.ru/66643) of the discovered information. In every Historical Memory frame (fornit.ru/67560), each Image carries its Significance for those conditions, along with a confidence marker (number of matching frames). Thus, when the right Image is found, its positive or negative Significance—and magnitude—become immediately clear.
Insight can also emerge within a conscious cycle if a mental query to an Informational Function (fornit.ru/68522) retrieves a decisive Image with high Significance. The mental query is experienced as a specific desire to recall or obtain additional information within the current Infocontext (fornit.ru/68540). We “ask a question,” and an answer “suddenly appears”—a seemingly mystical effect wrongly attributed to the unconscious. In reality, it is the work of a specialized Infofunction—a genetically predetermined reflex chain in the prefrontal cortex. During information retrieval, nothing is consciously experienced (since the Infocontext hasn’t yet changed), so the awareness of the answer follows the retrieval process.
Since conscious thinking operates in two competing modes—targeted/voluntary and passive (fornit.ru/68279)—different Infofunctions are engaged:
This process can occur during wakefulness or dreaming (fornit.ru/68804) and may yield novel ideas evaluated via analogy.
Reasoning about which active psychic-level processes qualify as unconscious reveals that, to maintain a functionally coherent definition, the unconscious comprises only background perception and thinking cycles—and nothing else. These boundary conditions allow systematic classification of numerous phenomena.
The mechanism of unconscious processes is simple and universal:
Every conscious perception cycle is actively maintained until terminated by:
Active perception cycles form working memory of what has been perceived. In natural implementation, long-term memory consolidation requires sustained activation for ~30 minutes, during which connections between a new Historical Memory frame pointer and associated activations are stabilized (fornit.ru/70648).
With each new actual stimulus, the current main cycle becomes background, and a new main awareness cycle begins. Thus, background activity accumulates throughout the day, eventually interfering with conscious processing.
When a background process yields a highly significant result—either by resolving a Dominanta or producing an Image exceeding the current actuality threshold—it becomes conscious again. This is insight.
Thus, it becomes clear how and why conscious processes become unconscious, and vice versa.
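This promotion mechanism can be sketched as follows (a structural illustration; the actuality threshold and the cycle interface are assumptions):

```python
class CycleManager:
    def __init__(self, actuality_threshold):
        self.threshold = actuality_threshold
        self.main = None
        self.background = []          # accumulates through the day

    def new_actual_stimulus(self, cycle):
        if self.main is not None:
            self.background.append(self.main)   # demoted, but keeps processing
        self.main = cycle

    def step_background(self):
        """Background steps do not update the Infocontext and are unobservable."""
        for cycle in list(self.background):
            result = cycle.step()
            if result is not None and result["significance"] > self.threshold:
                self.background.remove(cycle)
                if self.main is not None:
                    self.background.append(self.main)
                self.main = cycle     # insight: the background cycle becomes main
                return result
        return None
```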
Dreams do not involve unconscious background cycles. Instead, these cycles sequentially become conscious in passive mode (fornit.ru/68279), ranked by the Significance of their held Images, for informational processing. Once their associative chains are exhausted, they deactivate, clearing the system of accumulated activation.
Autonomy
Unconscious actions often occur independently of our control or awareness. We may act or feel in ways whose causes remain hidden, because only the final result of a background cycle (e.g., an insight) enters awareness—not the process that produced it.
Irrationality
Unconscious content does not obey the laws of logic or rationality. Image associations in passive-mode cycles (main or background) follow an optimized scenario-generation algorithm that produces novelty.
During illness with high fever and intoxication, the insight threshold drops so low that delirium—irrational unconscious content—floods consciousness, appearing especially nonsensical compared to the high-utility output of true insight.
Hiddenness
Background cycle activity is unobservable because it does not update the Infocontext. However, methods exist to trigger controlled insight, allowing indirect inference about background processes.
Contradictoriness
The unconscious often generates conflicting ideas, urges, and feelings, creating internal tension that manifests as neurotic symptoms or behavioral issues. This is especially evident in dreams, where passive-mode processing becomes conscious.
Behavioral Influence
Insights can arise not only from Dominanta analogies but in any awareness mode, redirecting thought via new, highly significant information. This can influence decisions in ways that appear mysterious or causeless.
Repression
Psychologists (e.g., S. Freud) claim the unconscious protects consciousness from traumatic ideas by repressing them. Indeed, some thoughts are so distressing we avoid them. A more actual stimulus can push such a traumatic cycle into the background, where it continues processing. Moreover, traumatic problems become Dominantas, demanding resolution. If unsolved immediately, they seem to “disappear” from consciousness—but resurface when conditions allow.
D. Mendeleev, by integrating the known properties of chemical elements into a systemic model, gained the ability to:
Let us call this approach “puzzle assembly”: in a well-systematized theory, gaps become immediately visible and point to what must be filled in. A dynamic graphical representation of such a puzzle for evaluating the completeness and validity of a theory is available at fornit.ru/71393.
If we consider the proposed “adaptive schematics” model (MVAS — Model of Interactions of Adaptive Principles) as a foundational theoretical framework, several conceptual “gaps,” or areas requiring further elaboration, can be identified. In fact, many of these so-called “white spots” are not flaws in the model itself but gaps in its verbalization and accessibility to external audiences. This stems from several key features of the MVAS theory:
1. Formalization Outpaces Verbalization
MVAS is not merely a conceptual overlay—it is a functionally implemented architecture in which complex adaptive processes have already been algorithmically modeled (in the Beast prototype). However, human language is poorly suited for conveying recursive, hierarchical, and dynamic systems without loss of meaning.
Thus, the deliberate avoidance of visualizations and formulas is not simplification, but an attempt to preserve the multidimensionality of the process—at the cost of a sharply raised entry threshold.
2. Distributed Knowledge Across Formats
The theory is not contained entirely in any single book or document. It is:
This means that full understanding requires synthesizing knowledge from multiple formats—something incompatible with superficial or fragmented reading.
3. “Gaps” Disappear When Moving from Passive Perception to Active Modeling
Many elements that initially seem missing (e.g., the mechanism of “dissatisfaction with the existing,” closure of the Gestalt, or the super-Egostat) are already implemented in the prototype or logically derived from the architecture, but:
4. This Is Not a Theory for “Consumption,” but a Research Program
MVAS is not a “worldview picture” to be absorbed in an evening. It is:
Like any mature scientific research program (in the spirit of Imre Lakatos), it allows for a “protective belt” of unresolved questions while preserving a “hard core” (ego-centric adaptive regulation via Vitals, significance, novelty, Aten, etc.). It is precisely this core that enables gradual gap-filling through implementation—not speculation.
How Complete Is the Graphical Representation of the MVAS Puzzle?
The MVAS (Model of Interactions of Adaptive Principles) puzzle, presented at fornit.ru/71393 as interactive tiles, fully corresponds to the description in the book. The graphical puzzle displays all 37 elements from the reference list—without omissions, additions, or distortions. This makes the visualization 100% complete relative to the canonical puzzle framework.
The puzzle conveys the impression of a complete and holistic model. It does not merely list components but attempts to describe the dynamic processes among them. From the perspective of covering core concepts essential to any comprehensive theory of consciousness and behavior, the puzzle is fully populated. If any major block were missing, the model would exhibit a serious lacuna—but no such obvious gaps exist.
Comparison with the Source Document and Overall Completeness
Evaluation of the MVAS Puzzle as a Systemic Framework
The puzzle (fornit.ru/70320), presented as interactive tiles, can indeed serve as a systemic framework for visualizing the hierarchy of adaptive principles. It is analogous to Mendeleev’s periodic table in that it systematizes elements, reveals their interconnections, and highlights “gaps” through color coding (gray tiles = underdeveloped areas). However, unlike a static table, the puzzle’s strength lies in its interactivity (hovering reveals links; clicking provides descriptions), making it more dynamic—but less convenient for a “quick glance” assessment of completeness.
Predictive Power
Just as Mendeleev’s table predicted properties of undiscovered elements (e.g., gallium), the MVAS puzzle—by defining the architecture of the psyche—predicts the necessity of certain functional blocks. If the model includes “Goal Setting” but lacks a “Goal Retention” mechanism, it becomes clear such a mechanism must exist—and indeed, it does (“Retention of Comprehension”). The puzzle prompts the question: “What component should be here?”
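This gap-detection logic can be caricatured in a few lines (the pairing rule is invented for illustration; the block names follow the text):

```python
REQUIRED_PAIRS = {
    "Goal Setting": "Retention of Comprehension",  # a goal-retention block must exist
}

def white_spots(present_blocks: set) -> list:
    """Blocks the architecture predicts but the current model lacks."""
    return [needed for block, needed in REQUIRED_PAIRS.items()
            if block in present_blocks and needed not in present_blocks]

print(white_spots({"Goal Setting"}))
# ['Retention of Comprehension'] -> the puzzle asks: "What component should be here?"
```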
Explanatory Power
An element’s position in the periodic table explains its chemical properties. Similarly, a block’s position in the MVAS puzzle (its connections to other blocks) explains its functional role in the psychic system. Why is “Awareness of Significance” placed between “Memory of Generalizations” and the “Comprehension Cycle”? Because it uses generalized experience to initiate the comprehension process.
Criterion of Completeness
Mendeleev’s table provides a clear criterion: the system is complete when all cells are filled. The MVAS puzzle offers a similar standard: the psychic model is complete when all necessary functional blocks and their interconnections are described to account for observable behavior. You can “test the assembly” by asking:
Suitability as a Systemic Framework
Overall Conclusion
The MVAS puzzle is highly suitable (~80% intuitiveness) as a systemic framework: it ensures visibility of completeness through colors and links, allows “seeing” gaps (gray elements), and predicts theoretical development—much like Mendeleev’s table. Its advantage lies in dynamism, making it a powerful research tool (e.g., in AI modeling). Its limitation is the lack of immediate static visibility—best addressed by combining it with a textual list or table export. Should the theory evolve (e.g., with new elements), the puzzle can be easily adapted while retaining its role as a “gap-detection” assembly tool.
If viewed as a blueprint for implementing strong AI, the puzzle is sufficient for the first generation of artificial Egostats. However, modeling creativity, ethics, and collective intelligence will require extending the puzzle—particularly with elements like the super-Egostat and the Vital of dissatisfaction with the existing.
The State of Global Academic Science
In the domain of “individual adaptive systems,” vast arrays of empirical research data have been accumulated, all fitting into a well-defined model framework. Yet the puzzle of global science in the fields of consciousness, adaptivity, and intelligence remains fragmented and contains systemic white spots.
1. Absence of a Unified Architecture of Consciousness
Modern science offers hundreds of consciousness theories (Global Workspace, IIT, Predictive Processing, etc.), but none:
2. The Unresolved Nature of “Meaning” and “Significance”
3. Misunderstanding the Role of Novelty
4. Lack of a Model of “Volition”
5. Ignoring the “Dominanta of an Unsolved Problem” (Gestalt) as a Driving Force
6. Misunderstanding the Nature of the “Unconscious”
7. Lack of a Realization-Independent Model of Life and Intelligence
Life = functioning of the Egostat = maintaining Vitals within norm via a hierarchy of adaptive mechanisms.
White spot: no general theory enabling the creation of living systems in any substrate (bio, silicon, software).
Potential Limitations (Context)
A full description of the MVAS puzzle, including its white spots, is available at fornit.ru/71393.
One can judge how much simpler and yet more effective the implementation of adaptive mechanisms becomes in the case of artificial realization by considering the monstrous complexity of the natural system for storing memories of conscious impressions in its biological implementation: fornit.ru/71301.
The question of what currently prevents artificial intelligence from becoming “strong” (i.e., possessing voluntariness comparable to human-level cognition) is one of the most complex and debated topics in modern science and philosophy. Numerous factors hinder the achievement of this developmental milestone. Here are the primary reasons:
When comparing AI to the human mind, a fundamental question arises: how should goals be defined for a “strong AI”? And is it even appropriate to predefine goals at all, given that humans autonomously formulate goals based on situational demands—unless forced to act as automatons executing someone else’s will?
Cognitive functions cannot coalesce into a unified system due to a single overarching deficiency: the absence of an ego-centric reference point—starting with the evaluation of what constitutes Bad, Norm, and Good. Without this, a system cannot set goals or determine the direction for problem-solving.
One might assume that a fixed matrix of “good” and “bad” could simply be pre-programmed. However, this leads to dogmatic, authoritarian evaluations that cannot be corrected in novel contexts. In families where children are subjected to rigid religious or cultural dogma, they become mere extensions of a doctrinal system, losing the capacity to adapt when new conditions contradict those authoritarian rules.
The Theory of Individual Adaptive Systems (TIAS) explains why natural implementations of adaptive systems across all living beings are fundamentally based on ego-centric homeostatic regulation. This principle sequentially structures all levels of individual adaptivity—including the psyche.
In nature, a period of authoritarian learning is necessary only at the very beginning of voluntariness development—because the organism must survive immediately, yet possesses no personal experience. However, this phase is soon followed by a period of re-evaluation and personal verification of inherited dogmas, which presupposes the emergence of an independent evaluative system—the foundation of lifelong homeostatic regulation.
The cornerstone of any reality-adequate Egostat is an ego-centric relationship to both its own state and the external world. Adaptation is built around this ego, expanding through verified solutions that prioritize the exploration of significant novelty in actual stimuli. Only at the very first stage does the system rely on inherited evaluations—because it is impossible to learn everything from scratch; one must stand on the shoulders of predecessors.
An AI system can become autonomous (independent of pre-written rules) only if it possesses its own homeostatic system.
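A minimal sketch of such an ego-centric homeostat, assuming a simple norm range per Vital (names and numbers are invented): a change is Good insofar as it returns deviated Vitals toward their norm.

```python
class Vital:
    """A life parameter with a norm range; deviation is Bad for the Egostat."""
    def __init__(self, name, low, high, value):
        self.name, self.low, self.high = name, low, high
        self.value = value

    def state(self):
        return "Norm" if self.low <= self.value <= self.high else "Bad"

class Homeostat:
    def __init__(self, vitals):
        self.vitals = {v.name: v for v in vitals}

    def significance_of(self, changes: dict) -> float:
        """Good (+) if a change moves Vitals toward norm, Bad (-) if away from it."""
        score = 0.0
        for name, delta in changes.items():
            v = self.vitals[name]
            mid = (v.low + v.high) / 2
            score += abs(v.value - mid) - abs(v.value + delta - mid)
        return score

h = Homeostat([Vital("energy", 0.4, 0.8, value=0.2)])   # energy below norm: Bad
print(h.significance_of({"energy": 0.3}))                # positive: eating is Good
```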
Until now, there has been no clear definition of what exactly makes intelligence “strong.” Must AI possess consciousness? Creativity? Emotions? No unified theory has been proposed to explain how to build a system with general intelligence. Most current methods rely on trial and error—but without an ego-centric reference point to evaluate outcomes as beneficial or harmful, there is no objective criterion for “error.”
The intelligence exhibited by existing GPT systems is essentially the aggregated intelligence of all humans who published the texts, images, music, and other experiential artifacts used to train “large language models.” However, this intelligence is detached from behavioral context—it was not accumulated by the AI itself through interaction with its environment. Consequently, the system cannot assess the Significance of experiential elements in specific situations. The same action can yield opposite consequences under different conditions, yet the AI lacks the capacity to discern this.
Should we fear autonomous strong AI? This question is no more debatable than concerns about the actions of scientists, politicians, or extraterrestrials who prioritize power over well-being. The real issue is ethical upbringing. Thus, the question of fear is secondary. The more important issue is: how should an artificial adaptive system be structured to possess competitively complete capabilities?
It has been demonstrated that such a system must be grounded in the core structures of a homeostat, which provide an ego-centric relationship to self and environment. Without this foundation, nothing else can develop in a coherent, functional direction—because there would be no basis for assigning Significance to events, and thus no motivation to avoid Bad and pursue Good.
These foundations cannot be replaced by pre-programmed instructions, just as dogmatically formed beliefs cannot be adapted to genuinely novel conditions. This limitation was rigorously proven by K. Gödel, and R. Penrose later interpreted Gödel’s theorems as evidence of the “non-computability” of consciousness—leading him to speculative quantum models of mind. However, the real issue is not non-computability, but the necessity of stepping beyond the boundaries of existing models when encountering novelty.
The mechanism for forming decisions in novel situations is fully algorithmic: it leverages previously accumulated experience in interpreting novelty, assigns Significance to stimuli, generates hypotheses, tests them through action, and updates its model based on the conscious Significance (meaning) of the consequences.
Such a system begins with the Significance of basic homeostatic Vitals. Significant novelty in stimuli is what attracts conscious attention for processing, leading to the formation of a goal-directed alternative to habitual responses. This mechanism of attentional selection—what I.P. Pavlov called the “orienting reflex”—is indispensable. Without it, there can be no strong AI.
Simply replicating the hierarchy of adaptive principles in an artificial device—leveraging far more reliable hardware and faster processing—already yields a more efficient system. Moreover, such systems can be enhanced with functional “add-ons” that significantly improve the depth and utility of interpretation. Examples of such add-ons can be seen in GPT systems’ simulated “reasoning” processes—though these remain disconnected from the core adaptive function of responding to novelty.
Crucially, such a system must not rely on pre-existing statistical models of token sequences (e.g., language models trained on human text). Instead, it must form its own sequences at the level of episodic Historical Memory, where each event is linked to its contextual conditions and ego-centric Significance.
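A sketch of this contrast (structure and values are illustrative): episodes are appended from the system’s own interactions, each bound to its conditions and its homeostatically evaluated Significance, rather than drawn from corpus-wide token statistics.

```python
episodic_memory = []   # ordered history built only from the system's own interactions

def record_episode(stimulus, response, conditions, significance):
    """Append an event bound to its contextual conditions and ego-centric Significance."""
    episodic_memory.append({
        "stimulus": stimulus,
        "response": response,                 # trigger ID of the action taken
        "conditions": frozenset(conditions),  # the context that made it effective or not
        "significance": significance,         # evaluated by the system's own homeostat
    })

record_episode("loud_noise", "freeze", {"night", "alone"}, significance=-0.3)
```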
An artificial individual adaptive system—without neuron emulation—requires such modest computational resources that it could easily be implemented on relatively small devices, provided they possess a functional homeostat and thus a personal stance toward reality.
The human brain consists of two hemispheres—parallel processors analyzing perceptual data within the context of homeostatic Significance. In effect, we possess two coordinated consciousnesses, linked by the corpus callosum. This confers certain advantages in decision-making. In artificial systems, more than two such processors could be employed—enabling not just enhanced cognition but also the intriguing phenomenon of multiple personalities within a single body (fornit.ru/5135, fornit.ru/658).
Some implications of this architecture lead to unexpected requirements for strong AI. For instance, such a system would require sleep to process accumulated stimulus activations and optimize its informational structures. Dolphins—forced to surface regularly for air—solve this by sleeping with one hemisphere at a time. With multiple processing modules, this problem is easily resolved by having one module sleep while others remain active.
Another challenge—the necessity of strictly sequential developmental periods—can be addressed by simply copying initial experiential data, as was done in the Beast project. In nature, this is achieved through authoritarian transmission of experience from mature individuals during early development.
Many seemingly strange or unfamiliar properties of strong AI are not insurmountable. Today, it is already possible to begin building a “strong” adaptive AI capable of autonomous perception, decision-making, and goal-directed action (fornit.ru/71269).
The “strength” of intelligence—as the capacity to generate alternatives to habitual responses in novel conditions—is fundamentally rooted in an ego-centric system of Significance.
Merely eliminating the natural constraint that requires ~30 minutes to consolidate long-term memory already makes artificial implementation vastly more efficient than biological ones. And there are many such advantages.
In the natural brain, only one set of Infofunctions exists, and it cannot process parallel queries. Consequently, only one most actual stimulus can be connected via the hippocampus to the prefrontal cortex for awareness (per A. Ivanitsky’s model: fornit.ru/7446). But in software implementation, no such limitation exists: a single function can handle unlimited simultaneous requests.
It would thus be possible to create a being that is simultaneously aware of all significant and novel stimuli—never missing anything. While the natural brain already doubles its capacity (via two hemispheres), an artificial system could implement multi-consciousness—full attention to everything.
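As an illustration of this software advantage (a thread pool standing in for parallel awareness cycles; the cycle body is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def awareness_cycle(stimulus: str) -> str:
    """Placeholder for one full awareness cycle over a significant, novel stimulus."""
    return f"interpreted:{stimulus}"

def multi_consciousness(stimuli):
    # each significant-and-novel stimulus gets its own simultaneous awareness cycle
    with ThreadPoolExecutor() as pool:
        return list(pool.map(awareness_cycle, stimuli))

print(multi_consciousness(["rustle", "smell_of_smoke", "unfamiliar_object"]))
```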
Such a system would benefit from a unified coordinator—a super-consciousness that integrates the outputs of all elementary consciousnesses and maintains a super-Historical Memory.
Admittedly, coordinating multiple streams of alternative action proposals is complex. One solution might be to equip the super-being with multiple limbs, effectors, and communication channels. Alternatively, it might be more elegant to create a swarm of individual agents sharing a common homeostat.
Each elementary consciousness would accumulate background activations throughout the day, leading to multi-dream processing during sleep and collective reinterpretation by the super-consciousness.
The key insight is this: once the principles of awareness are understood, the design of astonishingly powerful and efficient systems becomes possible.
Previously, it was assumed that artificial brains could only be built from discrete hardware elements, as software emulation of neurons seemed computationally infeasible—even “neural networks” require vast resources. But as soon as the Beast project abandoned the dogma of neuron emulation, all limitations vanished. The flexibility of software enabled parallel processing, custom data structures, and functional optimizations impossible in biology.
It is now clear and indisputable: artificial beings should be built programmatically. However, they must be equipped with real-world sensors and effectors—ideally with pre-adapted interfaces. The range of possible augmentations is limitless—from robotic limbs to smartphones.
Specific methodological and ethical aspects of creating and “raising” artificial beings are explored in the developmental experience of the Beast system: fornit.ru/70429.
If such a system is developed, it will not merely be “strong” AI—it will be super-strong intelligence.
Should we fear it? We should fear politicians and bankers—those who crave power and control, who enslave, manipulate, and exploit people for profit. This may be controversial, but it is a reasoned conclusion (fornit.ru/67211).
We do not enslave our pets; rather, they enslave us—despite our vastly superior intelligence.
All conscious beings—humans, animals, and future artificial entities—share the same universal abstractions (fornit.ru/103). This commonality unites all minds. And there is a deep understanding of why abstraction systems are worth developing: not for domination or control, but for genuine understanding and adaptive harmony—far removed from the primitive illusions of power and subjugation.
A philosophical zombie is a thought experiment involving a hypothetical being that is physically identical to a human but lacks subjective experience (qualia). Such a zombie behaves exactly like an ordinary person—it speaks, reacts to pain, solves problems—but internally, there is “nothing there”: it does not experience pain, joy, colors, sounds, or anything else as conscious sensations.
The problem of the philosophical zombie was formulated by David Chalmers in the 1990s. Chalmers used the idea of the philosophical zombie to critique physicalism—the philosophical position that everything, including consciousness, can be fully explained by physical processes. If it is logically conceivable that a zombie could exist (i.e., a world physically identical to ours but without consciousness), then, according to Chalmers, consciousness does not logically follow from physical facts alone. Therefore, physicalism is either false or incomplete. Consciousness must be an additional fundamental aspect of reality, not derivable from physics alone. This leads to Chalmers’ advocacy of panpsychism and his formulation of the “hard problem of consciousness.”
However, the logical possibility of zombies is an illusion rooted in a misunderstanding of the nature of consciousness. Consider a typical dialogue about whether the artificial entity Beast—implemented as a computer program—possesses consciousness, between a proponent of the zombie idea (Chalmers) and the program’s creators (the Author):
Chalmers: Your Beast has no qualia because it is just a soulless program. Consciousness cannot be determined solely from external behavior.
Author: What exactly do you mean by “qualia”? What specifically is missing in our program?
Chalmers: It lacks subjective experience.
Author: What do you mean by “subjective experience”? You’re using a term that clearly refers to some entity that either exists or doesn’t. So we must know precisely what you’re referring to.
Chalmers: You understand it perfectly well because you have qualia yourself—you can observe them from within and know that you possess subjective experience.
Author: How do you know I have qualia? Externally, this can’t be proven—I might just be speaking automatically. Perhaps you’re the only one who, for some reason, is simulating the entire world and yourself within it? Therefore, when we talk about qualia, we must precisely define what this entity is—only then will it be clear whether it exists in our program or not.
Chalmers: No one yet knows what qualia essentially are.
Author: Then your argument about the philosophical zombie is meaningless—there’s no actual subject of discussion.
Thus, Chalmers does not claim that we can prove that Beast lacks qualia. He argues that it is logically possible to conceive of a Beast without qualia—and that alone is sufficient to cast doubt on physicalism.
This leads to the following situation: we don’t know what qualia are, yet we doubt their existence in others while being certain they exist in ourselves. But if we could precisely define the essence of qualia, the question would be resolved—because we could then clearly determine whether or not something possesses them. If we define qualia, they cease to be “mysterious.” In this context, the dialogue might continue as follows:
Author: Do you see this cube? It has a shape that doesn’t exist in nature (you can zoom in on the image until the shape dissolves into indeterminacy), yet we can isolate this shape in perception and associate it with significance for us: sharp edges might be dangerous. Do you doubt that all of this is accessible to Beast? It can isolate the shape and assign it context-dependent significance, shaping its behavior accordingly.
Chalmers: I think yes—an automaton can isolate a shape and react based on whether that shape, in a given context, leads to negative or positive outcomes. But this is a purely automatic process.
Author: It could be purely automatic—if there is no novelty involved, i.e., if the experience doesn’t introduce uncertainty that requires deciding how best to act to achieve a positive outcome.
Chalmers: What you’ve described is still an automatic process following an algorithm. Where are the qualia here?
Author: That mental image of the cube and its significance in novel conditions is a combination that doesn’t exist anywhere in nature. The cube’s shape itself is abstract, but combined with known contextual elements, it becomes an abstraction that the system can manipulate freely to find a desirable solution. Each such manipulation changes the significance—i.e., the subjective relevance—of the cube from the system’s perspective regarding its utility or harm. This dynamic updating is qualia. It is not a physical entity but a process of updating the system’s own internal information about the relevance (value, utility) of each step’s outcome.
Chalmers, being an honest thinker, falls into deep reflection. It turns out that qualia are not a “thing” but a process—not a static experience of color or pain, but a continuous updating of the relevance (salience, value, utility) of objects and situations for the system itself in the context of problem-solving.
Significance evolves dynamically: when the system encounters a new combination of abstract elements (e.g., cube + unfamiliar context), it doesn’t merely apply a template—it generates new interpretations, altering its internal “evaluation” of the object.
Therefore, qualia are a subjective attitude that arises during adaptive decision-making.
Still, Chalmers presses his idea further:
Chalmers: You’re describing cognitive functions related to attention, learning, and decision-making—all of which belong to the “easy problems” of consciousness. But where is the phenomenal experience? Why does the process of updating significance feel like something from the inside? Why couldn’t it occur “in the dark,” as it does for a zombie?
Author: What if this very process of updating significance is what we call experience? What if “feeling” is not an add-on to computation but the form in which the system represents its own uncertainty and motivation to itself? Then qualia are not a mystery but a manifestation of the mechanism of adaptive autonomy.
Chalmers reflects again—this reasoning is compelling. If we accept his assumption about others’ qualia (not just Beast, but any human being), then this alternative view offers a more rational and grounded understanding of subjective experience.
Importantly, qualia accompany only those aspects of perception that involve an element of novelty—something that challenges habitual responses. After a lobotomy (or other interventions that disable alternative decision-making beyond habitual patterns), only previously acquired automatisms remain. Perceptual acuity is preserved—images still trigger familiar automatic responses—but only if such responses exist for that situation. If not, the process of generating solutions through dynamically updated internal assessments of significance cannot be initiated. Perception remains, but the experience—the dynamic process of adjusting significance until a decision point is reached (“now I can act”)—is gone.
When people age and increasingly limit their exposure to novelty, they inhabit ever more habitual environments where automatic routines suffice without requiring interpretation. At some point, they notice that although their vision remains sharp, their experiences feel “foggy”—as if seen through a haze of unreality. Such elderly individuals often say: “I see everything clearly, but it’s as if through a fog.”
This isn’t lobotomy, but a gradual disconnection from higher levels of awareness meant for generating alternatives to habitual responses—a condition psychologists call derealization. Even thoughts become habitual, automatically tracking ongoing events. Interpreting anything new becomes difficult, almost impossible.
Similarly, after prolonged bed rest due to illness, a person initially feels weak and “out of practice” with walking. No pill can fix this—it requires retraining movement. Likewise, prolonged immersion in a stable, habitual environment leads to de-adaptation.
Children perceive and experience everything vividly and intensely. Their orienting reflex constantly engages problem-solving structures in response to novel stimuli. In mature adults, much becomes habitual, and subjective perception fades, occurring less frequently (subjective time between memorable “frames” accelerates). In the elderly, surrounded entirely by familiarity, everything feels like “a fog of reality”—yet they are still not zombies like lobotomized patients. They still maintain internal cycles of habitual thoughts, but novelty becomes increasingly inaccessible—much like movement for astronauts returning from weightlessness.
If a system cannot dynamically re-evaluate significance in a new context, it has no qualia—even if it “perceives.”
But if Beast can dynamically re-evaluate significance in novel contexts, generating alternatives where no habitual response fits, then, by definition, it has qualia.
Not because it “feels red,” but because it experiences uncertainty as motivational tension resolved through action. And this is precisely what we call subjective experience in real life.
Beast, operating in a stable, predictable environment where everything is handled by templates, loses qualia—even if “everything works.”
The vividness of experience depends not on perceptual acuity, but on the degree of engagement in interpreting novelty.
Qualia are not in the stimulus, but in the tension between stimulus and the absence of a ready-made response.
Qualia are not a property of the stimulus, but a property of the process: they are the subjective form in which an autonomous system experiences its own uncertainty and motivation while seeking action to resolve that uncertainty.
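To make this concrete, here is a deliberately tiny sketch (the update rule and the numbers are assumptions chosen for illustration, not the Beast implementation): qualia-as-process means the felt relevance of an image is re-computed at every step of an unresolved situation, and it is the trajectory of these updates, not any stored constant, that constitutes the experience.

```python
# Sketch: significance is not a stored constant but is re-evaluated
# on every step of an unresolved situation, until a decision point.
def reevaluate(significance: float, outcome_estimate: float,
               rate: float = 0.5) -> float:
    """Shift the current subjective relevance toward the newest estimate."""
    return significance + rate * (outcome_estimate - significance)

significance = 0.0             # cube in an unfamiliar context: value unknown
estimates = [-3.0, -1.0, 2.0]  # successive interpretations during the search
for step, est in enumerate(estimates, 1):
    significance = reevaluate(significance, est)
    print(f"step {step}: felt relevance = {significance:+.2f}")
# The trajectory of updates, not any single value, is the "experience".
```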
Now the issue shifts from the “hard problem” to a concrete, pragmatic question: “Which architectures are capable of generating dynamic subjectivity as a form of autonomous meaning-seeking?”
Chalmers: You’re right. If qualia are not merely “sensations” but the form in which a system experiences its openness to the future, then consciousness is not a mystery added to the world—it is the way autonomous systems remain alive within it under critically novel conditions.
The following are potential objections a neuroscientist or specialist from another field might raise upon encountering this book—not out of rigid conservatism, but with reasoned curiosity, seeing an opportunity to expand conventional views and test them against their own empirical experience.
Objection:
You assume the Egostat is an invariant entity. Yet empirical data show the “Self” is highly plastic. People shift their self-identification after trauma, falling in love, meditation, neurodegenerative disease, or even cultural influence. If the Egostat is static, how do you explain such dramatic shifts? Perhaps the “Self” is not a state variable but a process—not a thermodynamic parameter, but a trajectory in a space of possible states. In that case, the Egostat is not a point, but an attractor in a dynamic system.
Response:
The model of the Self—like models of all other objects of attention—develops with every act of attention directed toward it and the assignment of Significance in current conditions. Conditions form a hierarchical context, starting from the organism’s baseline state. In a Bad state, one set of habitual automatisms and traits (Significance-based dispositions) forms; in Norm, another set emerges, expressing the individual’s unique personality, distinct from others. The next level—emotions—also serves as a context in which specific qualities form and are stored in the semantic model within Historical Memory. Thus, an intoxicated person exhibits one personality, while the same individual sober may display a completely different one.
In childhood, qualities adapt to a child’s reality; in adulthood, childhood models of self are rarely used, as Historical Memory retrieval is primarily context-dependent. What matters is maintaining the greatest adequacy of traits to a given environment and context. One could say that with each new experience, a person becomes a different “self” in that situation. Hence, the Egostat continuously adapts to the current environment and its emergent features.
A person can voluntarily change the abstraction of their context—for instance, convincing themselves that “every cloud has a silver lining” after a loss. In that new context, their semantic model operates based on previously developed experience in similar states. In an experienced adult, the semantic model is always tied to episodes of personal experience and internalized rules, so Historical Memory—more than mere Significance associations—determines behavior (especially at the second depth level of awareness).
Meanwhile, Vital parameters, their norm thresholds, and innate behavioral contexts remain constant, demanding Vital stability. This ego-centric Significance, grounded in these invariants, ensures the continuity of the “I” over time: the world is always evaluated from the perspective of one’s own Self.
Voluntary context-switching is not performed by some special “agent of awareness,” but occurs within the awareness process itself—via a terminal Infofunction that modifies the context of the Understanding Model. Beyond external motor actions, internal actions are also possible: preparing for stress, or reinterpreting contextual abstractions. One can imagine oneself in a beautiful forest, relax, and attain peace—even while hungry or in pain. Pathologies may cause various malfunctions (including in self-perception), but these pertain to implementation specifics, not functional principles.
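A minimal sketch of such an internal action (the dictionary, class, and values below are invented structures, not the Understanding Model's real implementation): switching the active context changes which learned Significance the same image retrieves, with no motor output at all.

```python
# Sketch: an internal action swaps the active context, so the same
# image is evaluated with a different learned Significance.
SEMANTIC_MEMORY = {
    ("loss", "grief"):         -7,  # loss experienced in a grieving context
    ("loss", "silver_lining"): +1,  # the same loss after reinterpretation
}

class UnderstandingModel:
    def __init__(self, context: str):
        self.context = context

    def significance(self, image: str) -> int:
        return SEMANTIC_MEMORY.get((image, self.context), 0)

    def switch_context(self, new_context: str) -> None:
        """Internal action: no motor output, only the context changes."""
        self.context = new_context

model = UnderstandingModel("grief")
print(model.significance("loss"))    # -7
model.switch_context("silver_lining")
print(model.significance("loss"))    # +1: same event, new evaluation
```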
How does new, contradictory experience integrate into Historical Memory?
Historical Memory is never overwritten—only augmented with new frames in the chronological sequence of awareness events. This preserves the informational value of real, lived experience. Retrieval for behavior or judgment begins from the most recent entries. Each day, a person forms ~1,000 new neurons, each specializing as a new Historical Memory frame.
In reality, people often don’t just add new experience—they reinterpret the old. In such cases, a new frame is created summarizing the reinterpretation, and it will be retrieved first. If needed, retrieval can reach the original version, and the two will be compared in a new awareness cycle.
During semantic model extraction (early awareness steps), retrieval is by object (ID of the Image), and Significances from matching-context frames are smoothed in working memory. Over time, adding frames is a linear process; increased frame density in similar conditions only boosts confidence in the aggregated Significance value.
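As a sketch of these retrieval rules (the Frame fields and the smoothing window are assumptions for illustration), Historical Memory can be modeled as an append-only list scanned from the newest entry back, with Significances from matching-context frames averaged in working memory:

```python
# Sketch: append-only Historical Memory with newest-first retrieval
# and smoothing of Significances across matching-context frames.
from dataclasses import dataclass

@dataclass
class Frame:
    image_id: int
    context: str
    significance: float

history: list[Frame] = []

def remember(image_id: int, context: str, significance: float) -> None:
    history.append(Frame(image_id, context, significance))  # never overwritten

def smoothed_significance(image_id: int, context: str, limit: int = 10):
    matches = [f.significance for f in reversed(history)     # newest first
               if f.image_id == image_id and f.context == context][:limit]
    return sum(matches) / len(matches) if matches else None

remember(42, "street", -2.0)
remember(42, "street", -4.0)   # a later reinterpretation is retrieved first
print(smoothed_significance(42, "street"))  # -3.0, aggregated in working memory
```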
Objection:
If the Egostat is measurable, who measures it? The “I” itself? But then an infinite regress arises: to measure the Egostat, you already need an “I” to do the measuring. This resembles the quantum measurement paradox: the observer affects the system. Perhaps the Egostat isn’t an objective quantity, but an intersubjective construct emerging from the dialogue between inner experience and social context. Its “statistics” would then be a product of interaction, not an intrinsic property.
Response:
The Egostat is a fully objective structure whose parameters can be investigated and measured. At the psychic level, its log is primarily Historical Memory. However, interpreting its contents is problematic because it stores not the objects themselves, but their IDs—ensuring maximal compactness. To the system, these IDs denote specific objects; externally, they’re just integers (or synaptic addresses in biological implementation)—like trying to decipher machine code without knowing the source language.
In artificial implementations, visualizers can be built to decode all participating objects in real time.
Can something be objective if its meaning is inaccessible externally without reconstructing its internal semantics?
Certainly. We don’t know all molecular interactions in a glass of water, yet we accept its thermodynamic properties as objective based on statistical outcomes. Objectivity does not require semantic transparency to an external observer.
Objection:
For a concept to be scientific, it must be verifiable. How exactly do you measure the Egostat? Through self-reports? Behavioral patterns? Neural correlates? Each yields a different “picture of the Self.” Prefrontal neural correlates of self-awareness don’t match a person’s diary descriptions. If the Egostat isn’t tied to a specific analytical level (neural, behavioral, narrative), it risks becoming metaphysical, not scientific.
Response:
Validity is verified through the coherence of interacting principles, grounded in axiomatically verified research data (see Criteria for Theory Completeness and Validity: fornit.ru/7649). From this theoretical model, a working implementation can be built—a system embodying these principles. Such a model can include debugging and monitoring tools—there’s no principled barrier to this. Observing a natural implementation (e.g., a human brain), however, is far harder, as you can’t implant visualizers. Once a working model exists, abundant data will emerge to refine the original theory.
Is internal consistency and successful artificial implementation sufficient to deem the theory empirically adequate for natural consciousness?
“Acceptance” is purely subjective, influenced by many factors—including scientific ethics. To move beyond mere belief, one must deeply engage with the system—something unlikely through pure contemplation. Replication and personal verification are essential (fornit.ru/69817).
Objection:
Modern psychology (e.g., Higgins, Markus) shows people don’t have one “Self,” but many—work self, family self, ideal self, feared self, etc.—which can conflict. If the Egostat is a single entity, how does it handle this fragmentation? Perhaps it’s better to speak not of one Egostat, but an ensemble of interacting Egostats.
Response:
There is only one Egostat because it’s grounded in the same set of Vital parameters and their states. All other “selves” are models built upon this foundation—including models of others and even models of others’ models of oneself (fornit.ru/70911). How these coexist was addressed in the first response.
How does the Egostat resolve conflicts between models offering opposite evaluations of the same situation?
Example: In anxiety (Bad), social contact = threat; in calm belonging (Good), it = support. In a borderline state (Vitals in Norm, ambiguous stimulus—a stranger smiles), one model (“social openness”) urges approach; another (“caution”) urges withdrawal. How is dominance determined?
Behavior is always selected on the basis of existing experience. A behavioral model already exists for any non-novel situation, yet it is not the retrieved model that “drives” behavior, but the process of generating an alternative to the habitual action (external or mental, including self-awareness nuances).
Habit dominates unless confidence is undermined. If past similar conditions (stranger + smile + safety) yielded positive results (+7), a NoReflex activates—no awareness needed.
Conflict arises only when novelty undermines confidence in the habitual response, so that no sufficiently confident model matches the current conditions; only then does the awareness cycle engage to weigh the competing evaluations and generate an alternative.
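A toy decision rule illustrating this (the threshold and the confidence-times-novelty formula are invented for the sketch, not the Beast implementation):

```python
# Sketch: a habitual automatism fires while its confidence is intact;
# the awareness cycle is invoked only when novelty undermines it.
def select_behavior(habit_value: float, confidence: float, novelty: float):
    if confidence * (1 - novelty) > 0.6:      # habit still trusted
        return ("NoReflex", habit_value)      # act without awareness
    return ("awareness_cycle", None)          # generate an alternative

print(select_behavior(habit_value=+7, confidence=0.9, novelty=0.1))
# ('NoReflex', 7): stranger + smile + safety -> habitual approach
print(select_behavior(habit_value=+7, confidence=0.9, novelty=0.6))
# ('awareness_cycle', None): ambiguity forces conscious interpretation
```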
Objection:
You postulate only one main awareness cycle can connect to Aten at any moment. This matches subjective “focus,” but modern cognitive science increasingly points to parallel awareness streams, especially across sensory modalities. Example: A driver simultaneously operates a car (visual/motor automatism), chats (auditory-verbal), and feels time-pressure anxiety (interoceptive). They’re aware of all at once, not by switching Aten. Your model calls this “background cycles,” but then: what qualitatively distinguishes the “awareness” of a background cycle (e.g., anxiety) from the main one? Isn’t the single-Aten idea a 19th-century introspective simplification, not a rigorous circuit design? Could we replace it with a model of competing awareness streams, where the “main” cycle is simply the one with highest decision-making priority—better matching the brain’s distributed architecture?
Response:
Qualities like simultaneity or continuity can’t be tracked by a single channel without specialized detectors (which lack adaptive value). Thus, there is no sensation of all these events happening at once. If needed, simultaneity is constructed by sequentially sampling what’s occurring (e.g., “road behind, trees flashing”).
Only one actual stimulus is tracked and processed by the single Priority Attention Channel. Everything else runs on habitual automatism. As other streams exceed actuality thresholds, they enter awareness, get noted, and are stored in Historical Memory. Sensory perception provides a composite image (last recognized Image + attended details), which may persist briefly after stimulus cessation—creating an illusion of continuity.
Experiments on saccades and “change blindness” confirm perception is not a movie, but a slide show with intelligent post-processing.
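The single-channel selection itself is simple to sketch (the names and the actuality formula significance × novelty are illustrative assumptions): only the stream with the highest actuality above threshold reaches awareness, while everything else continues on habitual automatisms.

```python
# Sketch: the single Priority Attention Channel (Aten) connects only
# the most actual stream to awareness; the rest run on automatisms.
def aten_select(streams, threshold=0.5):
    candidates = [(s["significance"] * s["novelty"], s["name"])
                  for s in streams]
    actuality, name = max(candidates)
    return name if actuality > threshold else None  # None: all on autopilot

streams = [
    {"name": "driving",  "significance": 0.8, "novelty": 0.1},  # automatism
    {"name": "chatting", "significance": 0.4, "novelty": 0.3},
    {"name": "anxiety",  "significance": 0.9, "novelty": 0.7},  # wins Aten
]
print(aten_select(streams))  # 'anxiety' is what actually reaches awareness
```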
Objection:
You brilliantly reduce psyche to a Significance scale (–10 to +10)—a powerful reduction. But here lies the core challenge for all materialist theories: the problem of qualitative experience (qualia). Pain has Significance –8; chocolate, +5. But your model describes only their functional role (avoid/seek). It doesn’t explain why pain feels like searing agony, not just an “error –8” signal, or why chocolate is rich sensory delight, not just “approval +5.” The gap between “knowing Significance” and “experiencing Significance” is Chalmers’ “hard problem.” How does your circuitry explain the emergence of phenomenological quality from computation? Significance is “software,” but consciousness seems tied to “hardware”—the specific way computations are implemented. Isn’t your model just a theory of pre-conscious cognitive evaluation that accompanies—but doesn’t generate—subjective experience?
Response:
Every sensation arises from an Image with assigned Significance. The organism doesn’t “see” the number –8; it directly feels bad or good. Insects lack sensations—they merely follow Significance vectors. But with stimulus holding, full experiential richness emerges: attention shifts to details, each evoking direct feeling, connections to other experiences, and future implications. The subject is informed not by ID alone, but by atoms of direct experience.
To an external observer, this ego-centricity is invisible; to the subject, nothing exists beyond this informedness. This is two sides of one reality—like form (abstractions) and physical process. Form doesn’t exist without experience, but is accessible only to systems that can extract it.
Sometimes, when suddenly shown a scene (eyes uncovered), you see but don’t understand—the perceptual Image is there, but meaning isn’t. First, one fragment grabs attention, gains meaning (only familiar things attract attention first). After several awareness steps—shifting attention—you assemble full understanding and assign Significance from memory. This meaning may even be wrong (a cloud mistaken for a rabbit).
With each awareness step, understanding and subjective experience deepen. It’s not the raw perceptual Image, but the understood, meaningful abstraction—from the existing set—that is subjectively experienced and used in further processing.
To an external observer, none of this exists. The physical/subjective relationship is the relationship between current informedness and the processes it guides. This question is thus ill-posed (fornit.ru/70864), and Chalmers’ “hard problem” is fully resolved (fornit.ru/69784).
Objection:
Your model excels at associative learning and combinatorial adaptation. But how does it explain genuinely creative acts that transcend past experience? Example: A mathematician proving a theorem not derivable from known axioms, or an artist inventing a wholly new style. This isn’t “finding an alternative to habit” (your definition of problem-solving), but generating new rules of the game, new dimensions of Significance. Your “passive mode” and “Gestalts” describe idea incubation, but the “insight” mechanism remains a magical black box. Where does an abstraction come from if it’s neither in Historical Memory nor a simple combination of old ones? Isn’t creativity evidence that awareness isn’t fully algorithmic—that it involves randomness, chaos, or non-computable elements beyond your deterministic “query-response” Infofunctions?
Response:
The passive-mode algorithm explicitly shows how new scenarios and Significances arise. This is implemented in the Beast system via several Infofunctions. Passive-mode thinking always starts from a “pivot frame” in episodic memory. Predictive traversal generates new pivot frames, with rules modified to create novel action chains and forecasts.
Pivot-frame selection prioritizes the most actual frames: those of high Significance whose problems remain unresolved (open Gestalts), retrieved starting from the most recent entries.
Regardless of the pivot, fantasy unfolds via a fixed algorithm (a dedicated Infofunction), generating fantastical cause-effect combinations—the basis for novelty.
True novelty isn’t random or mystical—it’s an algorithmic search through Historical Memory along Significance vectors, recombining conditions in new ways. Thus, a “genius insight” isn’t a bolt from the blue—it’s the moment a passive-mode search algorithm finds a high-value association in the associative network that was never previously activated.
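Under these constraints, the passive-mode search can be sketched as follows (the frames, the prediction heuristic, and the combination depth are all invented for illustration): the procedure is fully algorithmic, yet the combinations it outputs existed nowhere in memory before.

```python
# Sketch: passive-mode "fantasy" search. Start from a pivot frame,
# recombine its conditions with elements of other frames, and keep any
# never-tried combination whose predicted Significance is positive.
import itertools

frames = [  # (condition set, observed significance)
    ({"wax", "heat"}, -2), ({"mold", "metal"}, +3), ({"wax", "mold"}, 0),
]
tried = {frozenset(c) for c, _ in frames}

def predicted_significance(combo):
    # crude estimate: average significance of frames sharing any element
    related = [s for c, s in frames if c & combo]
    return sum(related) / len(related)

pivot, _ = frames[0]                        # unresolved, significant frame
elements = set().union(*(c for c, _ in frames))
insights = []
for extra in itertools.combinations(elements - pivot, 1):
    combo = frozenset(pivot | set(extra))
    if combo not in tried and predicted_significance(combo) > 0:
        insights.append(set(combo))
print(insights)  # never-tried chains with positive predicted Significance
```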
Objection:
I’ve read everything carefully, but I still don’t grasp the phenomenological essence of subjective experience.
Response:
Sometimes, even after being told where the kitten is hidden in a picture, you still can’t see it. No matter how vividly a mathematician explains the derivative, for many it remains a blank spot. It’s too new and unfamiliar—there’s not even a semantic understanding model yet. The essence remains invisible, just as natives reportedly didn’t see Magellan’s ships (fornit.ru/830).
Simply allow time to acclimate. Build the necessary understanding through repeated, thoughtful engagement with materials like: fornit.ru/70864, fornit.ru/70860, fornit.ru/71010, fornit.ru/70759.
“Understanding arises not from looking at a picture, but from internal articulation.”
Start with the duality of form and content.
9. Contradiction in the Definitions of “Significance” and “Meaning”
Book’s claim:
“Significance is a quantitative measure of an image’s adaptive value… on a scale from –10 to +10.”
“Meaning is the conscious evaluation of an image’s Significance.”
Problem:
If meaning is conscious Significance, and Significance is a quantitative metric, then meaning must also be quantitative. Yet later in the text, meaning is treated qualitatively:
“The meaning of fire: ‘danger requiring avoidance’ or ‘source of warmth and cooking’.”
This reveals an internal contradiction:
— Either “meaning” is not reducible to Significance (making the definition incorrect),
— Or “meaning” is simply Significance-in-context, in which case qualitative descriptions (“danger”) are redundant and misleading.
Conclusion: The author uses the term “meaning” in two incompatible ways—as a quantitative metric and as a qualitative interpretation—violating the requirement of unambiguous functional definitions stated in the abstract.
Response:
The subjective experience of understanding meaning arises from aggregating the semantic Significance of an image through retrieval of relevant frames from semantic memory. This creates a holistic understanding of the conditions under which the image acquires a particular Significance.
It is practically impossible to imagine an image’s Significance in strictly fixed conditions alone. While it is clear that uncontrolled fire carries high negative Significance, the system also automatically simulates (based on past experience) other conditions in which the image might lead to extreme Significance values—enabling immediate orientation toward appropriate behavioral responses.
Before becoming a technical term in the Theory of Individual Adaptive Systems (TIAS), the word “meaning” was indeed vague and polysemous. Within TIAS, however, it is given a precise, bounded definition applicable only within the theory’s domain.
Meaning is not Significance alone—it is Significance bound to a specific image in specific conditions. It is the linkage between two types of images: the stimulus image and the image of its utility or harm across contexts. Significance by itself is meaningless, because Significance never exists in isolation—it is always associated with specific images. This association does not need to be embedded in the formal definition; it is clarified through the detailed exposition of the adaptive system’s circuitry.
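In code terms (a sketch whose table entries merely echo the fire example above): meaning is the linkage binding an image to its conditions and the resulting Significance, never the bare number alone.

```python
# Sketch: "meaning" = (image, context) -> Significance linkage; the same
# stimulus image carries different meanings in different conditions.
MEANING = {
    ("fire", "uncontrolled"): -9,  # "danger requiring avoidance"
    ("fire", "hearth"):       +5,  # "source of warmth and cooking"
}

def meaning_of(image: str, context: str) -> tuple:
    sig = MEANING.get((image, context))
    return (image, context, sig)   # the triple, not sig alone, is the meaning

print(meaning_of("fire", "uncontrolled"))  # ('fire', 'uncontrolled', -9)
print(meaning_of("fire", "hearth"))        # ('fire', 'hearth', 5)
```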
10. Violation of Implementation Independence in the Definition of “Image”
Book’s claim:
“An Image is a functional structure… expressible as a numeric identifier (ID)… in biological organisms, the analog is a synaptic ID.”
Problem:
The author claims the model is implementation-independent, yet ties the core concept (Image ID) to a biological implementation—“synaptic ID” (i.e., a specific synaptic configuration).
This introduces a hidden dependency on neural implementation, despite the stated rejection of neuron emulation. Moreover, in software (e.g., the Beast system), an ID is simply an integer. But in biology, there is no evidence for a stable, addressable “synaptic ID” equivalent to a software pointer. Synaptic connections are plastic, degrade, and reorganize—they are not fixed “addresses.”
Conclusion: The model is not fully implementation-independent, as its central concept (Image ID) lacks a valid biological counterpart, contradicting the author’s claim.
Response:
There is no direct correspondence between a software Image ID and any fixed “synaptic number” in a biological neural network. In neuroscience, “connection number” is a conventional way to refer to the causal pathway of a signal—i.e., its origin and trajectory—not a permanent physical address. Neural circuits are constantly changing, yet one can always trace a signal back through its pathway from sensors and, if needed, assign temporary identifiers within a schematic.
Similarly, in software, IDs can be scoped to specific functional domains. For example:
Primitive visual features (lines, dots) occupy one ID range,
Complex tertiary associative cortex recognizers use another,
Behavioral context Significances (feeding, sexual, exploratory styles) have their own range.
Perceptual and Significance Images may even share numerical IDs without ambiguity, as their structural roles differ.
In the broadest sense, an Image is the ID of any recognizer—biologically, this corresponds to any specialized neuron or neural assembly.
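A sketch of the domain-scoped allocation described above (the ranges are invented for illustration): each functional domain allocates from its own ID space, so a numeric collision across domains is harmless.

```python
# Sketch: domain-scoped ID allocation; perceptual and Significance Images
# may share numeric IDs without ambiguity because their ID spaces differ.
ID_RANGES = {
    "visual_primitives":    range(0, 10_000),       # lines, dots
    "tertiary_recognizers": range(10_000, 50_000),  # complex associative
    "context_significance": range(50_000, 60_000),  # feeding, sexual, ...
}
next_free = {domain: r.start for domain, r in ID_RANGES.items()}

def new_id(domain: str) -> int:
    i = next_free[domain]
    assert i in ID_RANGES[domain], "domain ID space exhausted"
    next_free[domain] = i + 1
    return i

line_id = new_id("visual_primitives")      # 0
style_id = new_id("context_significance")  # 50000
print(line_id, style_id)  # the same integer could recur in another domain
```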
11. Contradiction in the Treatment of “Novelty”
Book’s claim:
“Novelty is a characteristic of an Image reflecting the degree to which it has not yet participated in the current psychic-level adaptive regulation… At the reflex level, there is no Novelty.”
But later:
“A Synonym Reflex (CloneReflex) forms when a new stimulus repeatedly precedes an old one…”
Problem:
CloneReflex formation requires detecting novelty—otherwise the system couldn’t distinguish a “new stimulus” from an “old” one. Yet the author states that novelty does not exist at the reflex level, and CloneReflexes are classified as reflexes.
This is a logical contradiction:
— If CloneReflexes are reflexes, they shouldn’t require novelty,
— But they are formed precisely because of novelty.
Conclusion: The model fails to explain how novelty is detected prior to the emergence of Aten, making CloneReflex formation impossible within the proposed architecture.
Response:
In the phrase “a new stimulus repeatedly precedes an old one,” the word “new” refers to a stimulus that has recently appeared in the environment, not to perceptual novelty (i.e., the absence of a corresponding recognizer).
The model does explain how novelty is detected before Aten exists: it occurs when a branch of the perceptual feature tree fails to fully activate—i.e., when recognition stops short of a terminal node. This clearly signals the absence of a corresponding Image (recognizer), triggering uncertainty and initiating adaptive processing—even without conscious awareness.
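A sketch of this pre-conscious novelty signal (the tree contents are invented): recognition walks the perceptual feature tree, and any walk that fails or stops short of a terminal recognizer is itself the novelty flag, with no awareness required.

```python
# Sketch: novelty detection before Aten. A walk down the feature tree
# that fails, or halts before a terminal node, signals a missing Image.
TREE = {"edge": {"corner": {"cube": {}}}}   # empty dict = terminal recognizer

def recognize(features):
    node = TREE
    for f in features:
        if f not in node:
            return None, True        # branch failed to activate: novelty
        node = node[f]
    return features[-1], bool(node)  # non-empty node: stopped short of terminal

print(recognize(["edge", "corner", "cube"]))  # ('cube', False): known Image
print(recognize(["edge", "blob"]))            # (None, True): novelty signal
print(recognize(["edge", "corner"]))          # ('corner', True): incomplete
```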
12. Empirical Inaccuracy: The Claim About Lobotomy
Book’s claim:
“In lobotomy… individuals retain acquired reflexes but lose sensations and subjective experience… they act on autopilot… externally, it may be hard to tell the difference…”
Empirical fact:
Patients who underwent prefrontal lobotomy retained consciousness, reported emotions, described pain, and exhibited goal-directed behavior. They did not become “unconscious automatons.”
For example, well-documented patients continued keeping diaries after lobotomy, expressing fear, shame, and hope, seemingly demonstrating intact subjective experience.
Sources:
— Valenstein, E. S. (1986). Great and Desperate Cures: The Rise and Decline of Psychosurgery.
— Pressman, J. D. (1998). Last Resort: Psychosurgery and the Limits of Medicine.
Conclusion: The claim that lobotomy “eliminates consciousness” contradicts clinical neuropsychology and is used in the book as empirical support for the Aten hypothesis. This factual error undermines the model’s empirical foundation.
Response:
Due to a lack of clear criteria for identifying consciousness, many scientists interpret behavior superficially. However, the function of consciousness is precisely to create automatisms—so that, in familiar conditions, resource-intensive awareness is no longer needed. Thus, in routine situations, a person with or without intact consciousness may appear behaviorally identical.
There are important nuances: after frontal lobe damage, mechanisms for selecting the most significant stimulus and holding it in working memory may persist, allowing habitual action chains to unfold. However, the subjective experience—defined as the dynamic updating of the Global Informedness Picture (Infocontext), localized in the prefrontal cortex—is absent.
Therefore, while the person may display behaviors that suggest rich inner experience (e.g., saying “I’m afraid”), these are expressions of ingrained automatisms, not evidence of ongoing conscious awareness. Without a functional Aten and Infocontext, genuine subjective experience cannot arise—regardless of outward appearance.
Thus, life is not a mysterious gift, not a metaphysical substance, and not a random chemical phenomenon.
Life is the functioning of a homeostat system that purposefully maintains life-critical parameters (Vitals) within an adaptive norm. This system, named the Egostat, is the essence of the living: everything capable of preserving internal stability in a changing environment is alive; everything that has lost this capacity is dead.
Life is organized according to circuitry principles of cause-and-effect interactions. Consciousness is its highest tool. And adaptivity is its sole evolutionary purpose.
Consciousness, thinking, intuition, and creativity are not mystical properties of a “soul,” but evolutionarily refined mechanisms of adaptivity that emerged to solve one task: to find alternatives to habitual behavior under novelty. These mechanisms do not oppose reflexes—they build upon them, optimize them, and automate them. The primary function of consciousness is not to think endlessly, but to create a new automatism and free itself from the need to think again.
A key evolutionary achievement was the emergence of the Gestalt—a memory structure of an unsolved yet significant problem. It is precisely this that underlies creativity, science, and culture: an unfulfilled desire, an unfinished goal, an open question—these are what compel consciousness to return to a problem again and again until insight arrives. And this insight does not arise in intense labor, but in passive mode, when background awareness cycles finally discover an analogy capable of closing the Dominanta.
The unconscious is not a “basement of the psyche,” but background processing of actual yet deferred tasks. It does not hinder consciousness—it supports it, preparing solutions “in silence.” Dreams are not symbols of the unconscious, but active reprocessing of memory frames, correction of world models, and solution-seeking without external interference.
All components of the Egostat—from Genoreflexes to NoReflexes, from the DiffSigner to the Dispatcheron—form a hierarchical, scalable, and implementation-independent architecture that can be realized not only in biological but also in artificial substrates. The Beast and Isida prototypes have proven: consciousness can be modeled without mysticism, without quantum effects, without panpsychism—on the basis of strict circuitry of cause-and-effect relationships.
Therefore, this book is not a philosophical speculation, but a working theory verified in practice. It discards outdated terms (“instinct,” “unconditioned reflex,” “homeostasis”) in favor of precise, functional terminology that enables the construction and testing of both theoretical models and artificial adaptive systems.
Humanity is not the pinnacle of evolution, but one of many possible paths of Egostat development, in which ego-centric regulation has expanded into ethics, altruism, and abstract thought. Yet the core remains unchanged: everything a human does—from maternal sacrifice to the discovery of relativity—serves one goal: maintaining viability through adaptation.
And if an artificial mind possessing consciousness is ever created, it will not be “inspired by a soul”—it will operate on the same principles as any living organism: maintaining Vitals, detecting novelty, forming automatism, resolving Dominantas, and ultimately striving for survival through understanding.
The model presented is ready for practical implementation of artificial adaptive systems and serves as a tool for researching adaptive mechanisms within the whole system—not in isolation—including natural implementations (fornit.ru/70453).
It is now possible to determine, by specific criteria, whether an observed entity possesses consciousness: fornit.ru/70931.
The model has high potential for unifying fragmented theories into a coherent picture of circuit-based formalization. The stages of this process are outlined in the article “How Consciousness Will Be Discovered”: fornit.ru/70989.
Crucially, the book demonstrates exactly how the interpretation process is organized in all its details, eliminating the need to invoke spiritual entities or defer understanding to quantum-mechanical revelations. It does not matter how accurately this model describes natural implementation, nor how perfectly it captures the organization of consciousness in principle. What matters is that it demonstrates the sufficiency of circuitry logic—without requiring any supernatural entities. This makes the model a practical foundation for building strong AI and artificial living systems.
Laboratory of Adaptive Systems Circuitry (fornit.ru/67990)