Shadows in the corner of logic

Why are the shadows of bad logic so clear? The main reason is how you learn to program and what you learn on that path. The first digital logic took the form of relays, whose contacts implemented the AND, OR, and NOT functions (see Ladder logic on Wikipedia). After that came analog transistor circuits and the first integrated circuits (see Logic gates, Analog-to-digital converter, and Transistor circuits on Wikipedia).

By using this logic and observing which elements are problematic, it is a matter of extension to realize that, when designing a more complex system, certain problems are always present in designs based on a simulation of digital logic. For instance, an SR flip-flop is the model of one bit of memory, yet representing the result of adding two such bits requires two bits to capture the carry. What becomes immediately clear is that the implementation, the interpretation, and the reality of the world are ambiguous. The meaning of 'carry', 'sign', and 'borrow' depends on the relationships and on how they are employed.
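That ambiguity of interpretation can be made concrete with a small sketch (the helper names here are illustrative, not from any library): the same four-bit pattern reads as 15 or as -1 depending on whether it is taken as unsigned or as two's complement.

```python
# A sketch of the ambiguity: one bit pattern, two valid readings.
# Helper names are illustrative, not from any particular library.

def as_unsigned(bits: int, width: int = 4) -> int:
    """Read a bit pattern as an unsigned integer."""
    return bits & ((1 << width) - 1)

def as_twos_complement(bits: int, width: int = 4) -> int:
    """Read the same bit pattern as a signed two's-complement value."""
    value = bits & ((1 << width) - 1)
    if value >= 1 << (width - 1):
        value -= 1 << width
    return value

pattern = 0b1111                    # one 4-bit pattern, two meanings
print(as_unsigned(pattern))         # 15
print(as_twos_complement(pattern))  # -1
```

Nothing in the register itself says which reading is correct; the meaning lives entirely in the relationships around it.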

The first control circuits were a mixture of analog transistor circuits and digital-like elements. It is possible to design a control circuit which operates as a program. When the initial designs use 4-bit registers and the register set is limited, certain common problems are obvious in application. These problems never go away as the systems become more complex; they simply get pushed further and further into the corner cases. When considering a program it is good practice to start at the corners and work in toward the problem. Carry overflow is one of those issues.
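As a sketch of that corner case, here is a hypothetical 4-bit adder (the function name is my own, for illustration) showing how a result silently wraps and only the carry flag records what happened:

```python
def add4(a: int, b: int) -> tuple[int, bool]:
    """Add two 4-bit values; return (wrapped result, carry flag)."""
    total = (a & 0xF) + (b & 0xF)
    return total & 0xF, total > 0xF

# 9 + 8 = 17, but a 4-bit register can only hold 0..15:
result, carry = add4(9, 8)
print(result, carry)  # 1 True -- the overflowed 16 survives only as the carry flag
```

A caller that ignores the flag sees 9 + 8 = 1, which is exactly the kind of corner that never goes away, only moves.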

As systems become more complex and instruction sets begin to include more complicated algorithms, the same corner cases carry over and new ones appear. When a program is compiled from assembly language or hex code, it is much easier to identify and avoid potential corner cases, but utility suffers from the time required to analyze and build each element. Safe methods can be devised, though many still require the use of corner cases at some point.
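One such safe method, sketched here with a hypothetical `checked_add` helper, is to test for overflow before performing the addition, arranged so that the test itself cannot overflow:

```python
def checked_add(a: int, b: int, width: int = 8) -> int:
    """Add two unsigned values, refusing to wrap silently.

    The overflow test compares b against the remaining headroom
    (limit - a) instead of computing a + b first, so on a machine
    with fixed-width registers the check itself cannot overflow.
    """
    limit = (1 << width) - 1
    if a < 0 or b < 0 or a > limit or b > limit:
        raise ValueError("inputs must fit in the register width")
    if b > limit - a:
        raise OverflowError("sum would not fit in the register")
    return a + b
```

With 8-bit width, `checked_add(100, 100)` returns 200, while `checked_add(200, 100)` raises OverflowError because 300 exceeds 255; the corner case is still there, but the method refuses to step into it silently.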

The concepts of computer control and digital logic, and all of their parts, suffer from a basic initial paradigm issue. The system derives from math and logic, and it assumes the integer one {1} at its core. What this does is create an infinity, in the form of a singularity, as the basis of all logic. It is no surprise that the extension of the system leads to some confounding results. The idea of a singularity, an object which is complete and separate from the universe, is a concept with no correlate in the universe.

The original systems of math were devised in pre-history, and it could be assumed that they derived from simple scratches on a rock indicating a correspondence between some set of objects and the set of scratches.

When programming or designing a complex system it is not always possible to be certain that the full and complete instantiation will be possible with a particular configuration. It sometimes becomes necessary to start with another approach and completely abandon the design, while retaining the experience to guide the selection of new methods.

The foundations of math and logic, and their counterpart in digital logic, suffer from certain odd corner cases which derive from the initial assumption that unique singularities exist. In this context the word singularity is intended to mean one {1}.

A finished design can incorporate elements that are inherently flawed and still be useful, as long as the person who uses that system is aware of the corner cases to be avoided in practice. The problem with computers is that this is rarely, if ever, the case. Few people are aware that every digital system suffers from the same ghost that haunts the corners between the model of reality and reality itself.

Mathematics is a useful system, but it suffers from the flaw of circular proof. First defining a system of logic and then proving that system valid by means of the system itself is without a doubt circular, and as such incomplete. The universe has the final say on what is real and what is not.

The concepts which arise in math and in computers are parallel because they derive from the same assumptions. A computer programmer would never claim that {-zero and +zero} are inherently real, or even that they represent something, unless they are simply associated that way in practice. By the same stroke, the insistence that sqrt(-1) has a real and physical correlate in the universe seems quite odd.
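Python's floats, which follow IEEE 754, make the point about signed zero directly: -0.0 compares equal to +0.0, yet the sign bit is genuinely stored and can be observed; likewise cmath gives sqrt(-1) a perfectly usable answer inside the system, with no claim attached about the physical world.

```python
import cmath
import math

# -0.0 and +0.0 compare equal, yet the sign bit is really stored:
print(-0.0 == 0.0)               # True
print(math.copysign(1.0, -0.0))  # -1.0: the hidden sign bit shows through

# sqrt(-1) is a perfectly usable value inside the system...
i = cmath.sqrt(-1)
print(i)                         # 1j
# ...without that implying a physical correlate outside it.
```

Both constructs are artifacts of the chosen representation: useful in practice, real only by association.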

These prime assumptions have consequences. A system whose design has accreted since human pre-history is not likely to be the best design on the first try.

Personally, I find it easy to use mathematics and have no trouble dealing with manifolds, partial differentials, wave functions, Fourier analysis, matrix computation, and many other complex methods. But, as with the foundations of computers, I recognize that -0 isn't real, and I consider that in practice. Mathematics is a method, a tool, and sometimes the tool is not right for the job. I suppose a hammer can be used to insert screws, but a better tool can be devised if you understand the underlying problem. A system of interacting infinities requires something a bit more elegant than a hammer.

The problem, as I see it by analogy, is this: it is assumed, incorrectly IMHO, that if one creates a puppet "Pinocchio" and works very hard for a long time on the detail, one will eventually have a little boy. The problem is that a being is infinite in construction, and no matter how many coats of paint are applied at the molecular level, the puppet is still a puppet.


Automated Intelligence

Mission of the infinite LOL cats