"HOMO RATIONALIS" AND HUMANIANITY

 
HELPING TO PROMOTE OUR THIRD EXPONENTIAL CHANGE
 

THE CONCEPT OF SUB-MODELS



So far, we have been talking about the "Subjective Model" and the "Objective Model" as if these were two separate things and perhaps the only two alternatives as far as our modeling of "Reality" is concerned. And in one sense this would be true. We can indeed think of all those beliefs that develop from moment-by-moment subjective experience and that guide our behavior from moment to moment, and contrast them with all those beliefs that we have acquired instead primarily from others through the use of language and other sets of symbols, beliefs that sometimes modify (often in very important ways) what we imagine and what we do.


If we think about the development of either the Subjective Model or the Objective Model, we will realize that the history of the development of those Models was not the orderly development of increasingly complex beliefs based upon the very simplest and most basic beliefs possible, as would be attempted in a geometry textbook. Instead, we can imagine that the development of each of these Models (Subjective and Objective) is similar to the development of many individual "lumps" within a fluid, lumps that ultimately will coalesce, such that the liquid will presumably, in the end, become completely solid. According to this metaphor (model), each of those lumps is a model (or sub-model), useful for the purposes at that time, and it is only with the passage of further time that it perhaps becomes evident that two or more of those models are specific examples of a more general model that represents an "underlying truth" of each of the more specific models. (This would be the coalescence of lumps in our metaphor.)


But since we know full well that models, or beliefs, can be inaccurate, or "wrong," it may come to pass that two lumps will not be able to coalesce, because they are logically incompatible in some way: the rules of logic (applied to linguistic models of those beliefs) will not allow both of them to be considered correct, since they would yield opposite predictions.


So to a great extent we have lots of little "belief systems" about various seemingly unrelated topics. Within each of those little belief systems, if the beliefs are expressed in words, they will most likely be found to be non-contradictory according to the rules of logic. The beliefs will be logically consistent with each other, and indeed "hang together" like "systems." But it is also possible that if we take a close look at two "little belief systems" ("sub-models") within the total collection of belief systems, we will find two beliefs (one in each little belief system) that, if modeled linguistically ("put into words"), will be found according to the rules of logic to be contradictory to each other. And certainly we know that in the Objective Model (which is the "property" of our species in general) there is much disagreement, and thus there are contradictory sub-models within that Objective Model. So when we refer to the Objective Model, we must always remember that we are not referring to an internally consistent belief system, but instead to a collection of belief systems, some of which may be contradictory to each other.
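

As a toy illustration of such a contradiction between two internally consistent belief systems (a hypothetical sketch of my own, with made-up propositions, not anything taken from this presentation), each little belief system can be represented as a set of statements with assigned truth values, and a contradiction is simply a statement to which the two systems assign opposite values:

```python
# Toy sketch: each "little belief system" maps a proposition (in words)
# to the truth value that the system assigns to it.
everyday_model = {
    "heavy objects fall faster than light ones": True,
    "dropped objects fall toward the ground": True,
}
physics_model = {
    "heavy objects fall faster than light ones": False,  # in a vacuum
    "dropped objects fall toward the ground": True,
}

def contradictions(system_a, system_b):
    """Propositions to which the two systems assign opposite truth values."""
    return [p for p in system_a
            if p in system_b and system_a[p] != system_b[p]]

# Each system is internally consistent, yet the collection is not:
print(contradictions(everyday_model, physics_model))
# -> ['heavy objects fall faster than light ones']
```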


We have also seen that the most important feature of a model is that it "works," that is, produces specific predictions that turn out to be, or would turn out to be, what actually happens or would happen (given certain specified circumstances). That most important feature is essentially the defining characteristic of a model. And we have seen that some models are easier to work with in some situations than are others. For almost all of our daily living, we do not need to go beyond Newton's laws to accomplish things satisfactorily. But to get our astronauts back from the moon, we need, as I understand it, to use the more precise, accurate, and comprehensive model of relativity. Similarly, there are some highly specialized fields that require the use of quantum mechanics, a model that is quite "counterintuitive" and, for most purposes, quite unnecessary. Furthermore, we do not even need to know anything at all about Newton's laws to get up and go into the kitchen, or even to get to the convenience store.
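

To put a rough number on how little the more precise model usually matters (a worked sketch I am adding for illustration; the speeds are hypothetical round figures), relativity's correction factor differs from the Newtonian assumption of exactly 1 only far beyond everyday speeds:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """Relativity's correction factor; Newton's model in effect assumes 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for label, v in [("walking (1.4 m/s)", 1.4),
                 ("highway driving (30 m/s)", 30.0),
                 ("spacecraft (11,000 m/s)", 11_000.0)]:
    print(f"{label}: factor = {lorentz_factor(v):.15f}")

# Even at spacecraft speeds the factor is about 1.0000000007, so the
# Newtonian sub-model "works" for the kitchen and the convenience store.
```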


So we have been talking about how two models (or sub-models) can be incompatible with each other in that, when modeled linguistically, they contradict each other according to the rules of logic, even though each may have some usefulness in certain (usually different) situations.


But there is another kind of incompatibility that involves the use of different "materials" for that modeling. Under such circumstances, each of the models may be quite satisfactory for the intended use, but they cannot be combined into one modeling process.


There are many simple examples of this kind of difficulty. Very obviously, a plastic model of one half of a car cannot be combined with a picture or diagram of the other half of the car to produce an effective model of the car. Another example would be the incompatibility between a holographic image and a sculpture, if one tried to combine the two into a single object. In the same way, a linguistic model cannot be combined with a diagram. And another example would be two descriptions rendered in different languages. They can both be used, but not as one model.


It may (or may not) be possible for the information provided by one model to be transferred to another model by a translational process, but the two models cannot be considered just two components of one model. That translational process actually involves modeling at a "higher level," involving the development of a third model, one that, in the case of two languages, allows one to use sentences in one language to predict what people using the other language would say when modeling a belief, conveying a request, etc.


More subtly, there may be two linguistic models (in the same language) using the same words, but with the words having different meanings in the two models. The use of both models as parts of the same model would involve the construction of a third linguistic model that allowed for accurate translation of the words as used in one of the models into words that would mean the same thing in the alternative model.
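

A minimal sketch of such a "third model" (the words and glosses here are my own hypothetical examples): an explicit table recording what a shared word means within each sub-model, so that a statement from one sub-model can be restated in terms the other can absorb:

```python
# Hypothetical sketch: the word "work" appears in two sub-models with
# different meanings; the "third model" is an explicit table recording
# what the shared word means within each sub-model.
meanings = {
    ("physics", "work"):  "energy transferred by a force acting through a distance",
    ("everyday", "work"): "effortful activity done for pay or a purpose",
}

def disambiguate(word, sub_model):
    """Use the higher-level model to say what a word means in a sub-model."""
    return meanings.get((sub_model, word), word)

print(disambiguate("work", "physics"))   # energy transferred by a force ...
print(disambiguate("work", "everyday"))  # effortful activity done for ...
```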


So what we are considering here are certain problems that may arise from the attempt to merge sub-models into one model, when the same words in the two sub-models may mean different things. This is a problem with the "material" with which we construct our models, words (or language) being a component of subjective experience, which is, as we have noted, the only material we have to work with.


During the development of the Objective Model, we have studied what we might call "the nature of Reality" in many different ways, related to the many different problems that we have had to solve and the many different decisions that we have had to make. That is why we have "fields of study." Each of those fields of study may use models specifically created for that particular field of study.


We know that the Objective Model has its origins in linguistic modeling and that therefore it is highly dependent upon language. And for any objective linguistic model to be effective, the words that it uses must have the same meaning, by agreement, for everyone using the model. This is fairly obvious in the sciences, where each individual science has a lexicon, or agreed-upon terminology, that it uses to deal with the subject matter that that science is about. So we can say that any objective model of significant complexity and usefulness will more than likely have its own language, so to speak. It is not that all of its words will be used only within that model, but that there will be specific words that have specific meanings in that model, such that the model is coherent and effective and teachable.


Now these words that are specific to a particular model may happen to be used only within that model, without additional meanings that are useful in other models, or they may indeed also be used in other models with different meanings. So confusion can occur because of words having different meanings, depending on which model they are being used in. Often confusion is avoided because people can tell which model is being used. They would be likely to call that judgment "understanding the meaning of the word because of understanding the context in which it is being used."
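

As a caricature of that judgment (a toy sketch with made-up cue words, not a claim about how people actually perform it), one can often guess which sub-model is active from the other words surrounding the ambiguous one:

```python
def infer_sub_model(sentence):
    """Toy context detector: guess the active sub-model from cue words."""
    physics_cues = {"joules", "force", "energy", "newtons"}
    words = set(sentence.lower().replace(".", "").split())
    return "physics" if words & physics_cues else "everyday"

print(infer_sub_model("The force did work measured in joules."))  # physics
print(infer_sub_model("I have too much work to do today."))       # everyday
```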


But it can also happen that people may not recognize the fact that the word is being used with a different meaning, depending upon the model, or context, in which it is being used. And it will turn out, I believe, that this is one of the factors causing the "mind-body problem" and the "free will vs. determinism problem."


So we have seen that both the Subjective Model and the Objective Model, as the terms are being used in this presentation, are actually sets of models (and therefore capitalized), and that as those individual models within each of the two Models have developed and have tended to merge with one another, at times imperfectly, some models may be considered to be sub-models within other, more general, models. And these sub-models may be somewhat incompatible by virtue of logical contradiction if linguistically modeled, this fact being consistent with the idea that this modeling process is an imperfect but probably gradually improving one. And we have seen that the imperfections of language (linguistic modeling) may indeed complicate some of those contradictions and make them more difficult to understand.


So we have the situation that the same brain may have many, many models (patterns of enhanced neuronal synaptic networks) that are partially independent of each other, and either consistent or inconsistent with each other, but useful to a greater or lesser extent, depending upon the situation. In some situations one model may become active, and in other situations another model may become active. And if what we predict will happen is what happens, i.e., if no mistake is made, then the model has worked, whether or not it is inconsistent with or contradictory to some other model that also sometimes works.
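

One might caricature this arrangement in code (a hypothetical sketch, with invented situations and predictions, not a claim about neurophysiology): the situation selects which model becomes active, and the active model "works" whenever its prediction matches what happens, regardless of whether it contradicts some other stored model:

```python
# Hypothetical sketch: several partially independent models coexist; the
# situation determines which one becomes active, and success is judged
# only by whether the active model's prediction matches the outcome.
models = {
    "kitchen": lambda: "the dropped cup falls straight down",
    "orbit":   lambda: "the released cup floats in place",
}

def act(situation, observed_outcome):
    prediction = models[situation]()           # one model becomes active
    worked = (prediction == observed_outcome)  # did the model "work"?
    return prediction, worked

print(act("kitchen", "the dropped cup falls straight down"))  # (..., True)
print(act("orbit", "the released cup floats in place"))       # (..., True)
# The two models make contradictory predictions about a released cup,
# yet each one works in its own situation.
```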


So now we need to look at some ways we have of thinking about (modeling) Reality that involve different methods of modeling, for instance, using different materials or different starting points (assumptions), while remaining aware that the models developed by those different methods may be incompatible despite their usefulness in different kinds of situations.