The brain of ants

The integration of all these methods is converging into a single bottom-up plus top-down, meet-in-the-middle approach, and this is perhaps why the way I develop a system seems so odd. I must deal with each aspect at the lowest level of the system and determine how it affects the whole, while also defining the goals from the top-down perspective, so the overall process is approached from two directions. The possibilities must be adjusted for the reality of what is achievable with the available resources.

For each part of the system that must be integrated, I need to determine how it affects all the other parts. I started with a basic model of how this was to be achieved, and it seems that it will reliably produce all of the effects that I intended.

The basic method is this, from a programming perspective. Structures are defined for everything in the system, and lists are created that order these objects and their association with methods. Methods all share the same signature: the object is passed as a single pointer to the object structure that is to be manipulated. This lets me select functions (methods) from one list and execute them in order from another list. Since each function takes a single object pointer as input, the interface is cleaner and does not require a person to understand what each passed element is intended to do. Since methods are associated with specific object types, programming is less error prone, as it is not possible to assign a method that will corrupt or over-run a data structure.
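
A minimal sketch of that single-pointer interface in C, assuming a trivial counter object; the type names and the two methods here are invented for the illustration, not taken from the actual system:

#include <stdio.h>

/* Every object carries a type tag so that only methods registered
   for that type can ever be applied to it. */
typedef enum { OBJ_COUNTER, OBJ_LABEL } obj_type;

typedef struct {
    obj_type type;
    int      value;
    char     name[32];
} object;

/* All methods share one signature: a single pointer to the object
   structure they manipulate. */
typedef void (*method_fn)(object *obj);

static void increment(object *obj) { obj->value++; }
static void report(object *obj)    { printf("%s = %d\n", obj->name, obj->value); }

int main(void)
{
    object counter = { OBJ_COUNTER, 0, "ticks" };

    /* A list of methods executed in order; here a single object is
       driven through a fixed sequence selected from that list. */
    method_fn sequence[] = { increment, increment, report };
    size_t    steps      = sizeof sequence / sizeof sequence[0];

    for (size_t i = 0; i < steps; i++)
        sequence[i](&counter);

    return 0;
}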

Errors are handled at the lowest level of operation, and the inherent ability of "C" arrays to be indexed beyond their physical bounds is specifically addressed.
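
As a hedged illustration of that lowest-level handling, an array can be wrapped together with its own length so that every access is checked before the underlying C storage is touched; the names here are placeholders:

#include <stddef.h>
#include <stdio.h>

/* A bare C array knows nothing about its own length, so the length
   travels with the data and every access is validated at the lowest level. */
typedef struct {
    int    *data;
    size_t  length;
} bounded_array;

/* Returns 0 on success, -1 if the index would over-run the storage. */
static int bounded_set(bounded_array *a, size_t index, int value)
{
    if (a == NULL || a->data == NULL || index >= a->length) {
        fprintf(stderr, "bounded_set: index %zu out of range\n", index);
        return -1;
    }
    a->data[index] = value;
    return 0;
}

int main(void)
{
    int storage[4] = {0};
    bounded_array a = { storage, 4 };

    bounded_set(&a, 2, 42);   /* accepted */
    bounded_set(&a, 9, 42);   /* rejected: would extend beyond the array */
    return 0;
}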

In order for the entire system to operate as self-programming, these factors have to be instantiated in the structure itself. It is self-creating: in the way that UML can model UML, XML can describe XML, or a document can describe itself, the program assembles itself. Each step of the process is designed to remove the programmer from the role of typing each individual element of a process, so that creative talent is applied at the highest levels of its operation to determine whether it has failed to be emergent in the process of its development.
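
One way to picture a structure that carries its own description, in the same spirit as UML modelling UML, is a descriptor type whose instances include a descriptor of the descriptor type itself. This is only a minimal sketch; the field and type names are invented for the example:

#include <stdio.h>

/* A field descriptor describes one member of a structure. */
typedef struct field_desc {
    const char *name;
    const char *type;
} field_desc;

/* A structure descriptor describes a whole structure type. */
typedef struct struct_desc {
    const char       *name;
    const field_desc *fields;
    int               field_count;
} struct_desc;

/* The descriptor that describes the descriptor type itself,
   the same way XML can carry a schema for XML. */
static const field_desc struct_desc_fields[] = {
    { "name",        "const char *" },
    { "fields",      "const field_desc *" },
    { "field_count", "int" },
};

static const struct_desc struct_desc_meta = {
    "struct_desc", struct_desc_fields, 3
};

int main(void)
{
    for (int i = 0; i < struct_desc_meta.field_count; i++)
        printf("%s.%s : %s\n", struct_desc_meta.name,
               struct_desc_meta.fields[i].name,
               struct_desc_meta.fields[i].type);
    return 0;
}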

An example of this is that I incorporate a pair of programs that compile each other in sequence. The process is then measured by the program itself to see whether it has passed the compile without errors, and then it proceeds to send the program to a machine under the control of the first machine. The second machine is measured, and if it performs as expected, it is considered a success and a version step ensues with all associated data. This would be a diff and patch, tarring the original, creating a step number, and then proceeding to the next stage of altering the program toward its goals. By identifying goals and the methods that match those goals, the program can order a sequence that eventually leads to a connection between existing states and the desired consequence.
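
The compile, measure, and version-step cycle could be sketched roughly as follows; the directory layout, the deploy-and-test script, and the use of system() are assumptions made for the example, not the actual harness:

#include <stdio.h>
#include <stdlib.h>

/* One pass of the cycle: build the candidate, test it on the controlled
   machine, and only if both succeed, archive the original, record the diff,
   and advance the step number. */
static int run(const char *cmd)
{
    return system(cmd) == 0 ? 0 : -1;   /* non-zero exit means failure */
}

int main(void)
{
    int  step = 1;        /* current version step number */
    char cmd[256];

    if (run("make -C candidate") != 0) {
        fprintf(stderr, "compile failed; no version step\n");
        return 1;
    }
    if (run("./deploy_and_test_dut.sh") != 0) {   /* hypothetical test script */
        fprintf(stderr, "DUT test failed; no version step\n");
        return 1;
    }

    /* Success: tar the original, keep the diff, and step the version. */
    snprintf(cmd, sizeof cmd, "tar czf original-step%d.tar.gz original/", step);
    run(cmd);
    snprintf(cmd, sizeof cmd, "diff -ru original/ candidate/ > step%d.patch", step);
    run(cmd);
    printf("step %d recorded, advancing to step %d\n", step, step + 1);
    return 0;
}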

As such, the program can be left to attempt to reach a specific goal without intervention, and it can proceed at the fastest speed at which each of these elements can be applied. The secondary machine is supplied with my OS and is reset after each test pass so that it starts from a known state. I use my operating system because it was designed to have no unknown states: I do not use interrupts, the FPU, or the GPU. A biological or other device that I might create would not have these facilities available either. It can, however, communicate with a third machine which has all of these facilities. In this way, I can guarantee that the DUT (Device Under Test) is not subject to a condition which is unpredictable from the processor's standpoint.
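
A rough outline of one test pass from the controlling machine's side, assuming placeholder scripts for re-imaging, loading, and measuring the DUT:

#include <stdio.h>
#include <stdlib.h>

/* Each test pass starts from a known state: re-image the DUT with the
   controlled OS, load the candidate program, and measure the result.
   The script names stand in for whatever mechanism the controlling
   machine actually uses (serial, network boot, etc.). */
static int pass_once(void)
{
    if (system("./reset_dut_to_known_image.sh") != 0)   /* hypothetical */
        return -1;
    if (system("./load_candidate_onto_dut.sh") != 0)    /* hypothetical */
        return -1;
    return system("./measure_dut_behaviour.sh");        /* 0 = as expected */
}

int main(void)
{
    for (int pass = 0; pass < 10; pass++) {
        if (pass_once() != 0) {
            printf("pass %d: DUT did not behave as expected\n", pass);
            return 1;
        }
        printf("pass %d: success\n", pass);
    }
    return 0;
}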

Including asynchronous interrupts creates a complexity in determinism that requires me to look at every instruction and consider whether it could be changed by an intervening INT, DMA, VRTC, or FPU fault. It may seem a fantastic level of minutiae to encompass; however, my training runs from the gate level to assembly to complex languages, and I do not find it burdensome to deal with every possible state of a program as if it were an FPGA. This is quite often necessary when a person is required to develop ROMmable or firmware-style methods. Since a flaw will make the machine permanently unusable (firmware is now flashable, but that requires intervention), it demands an attention to detail that is not required in programs that can simply be recompiled to deal with a flaw.

When the same methods are created in a self-replicating (or 3D printed) neural array, the process can proceed from its origins to a complete system again. The test is whether it can self-program a complex interactive network application with only the goal as a given. If that can be achieved, then I can proceed to apply it to a physical device which not only programs itself, but determines which circuit arrangements are to be made to produce the fastest and most reliable outcome.

The entire process uses methods from LISP, OOP, XML, and structured programming, along with many of the existing automated processes such as autoconfigure, autodoc, make, svn, git, shell scripting, and the vast array of programming libraries and languages like Python and Perl.

I haven't even discussed genotype and phenotype here, and that is just part of how models (objects and methods) are handled. That happens at a much higher level than the underlying process that supports the stable expansion.
