Linux kernel understanding

The Ottawa LUG (Linux Users Group) created several videos that go into great detail about how the kernel is designed, structured, maintained, and used. They are reasonably informative without getting so technical that they bog down. The subjects are very complex, and if you get sidetracked into the SCSI protocol, USB stacks, drivers, and the specifics of a particular machine, there is years' worth of information. These videos are deep enough to be informative while leaving the technical detail to be known already, or discovered if needed.

One interesting thing I learned is that the kernel is "object flavored" in its implementation. C is not inherently OO (object oriented), but the kernel code is styled to be OO. This is essentially what I do in my own code, in a slightly different way: I define data sets, which might be simple data types, complex structures, or even sections of a structure, to be "associated" with a method in context. The kernel and C++ implementations use a strictly single-context association, whereas I use methods that operate across data sets and then determine whether a method is valid in a specific context, or excluded under a specific condition or context.
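A minimal sketch of the struct-plus-function-pointer style the kernel uses (think struct file_operations); the device and ops names here are invented for illustration, and my own context-checked association is more involved than this.

#include <stdio.h>

/* An "object" in plain C: data plus a table of function pointers
 * bound to it, in the style of the kernel's ops structures. */
struct device;

struct device_ops {
    int  (*open)(struct device *dev);
    void (*close)(struct device *dev);
};

struct device {
    const char *name;
    const struct device_ops *ops;   /* "methods" associated with the data */
};

static int serial_open(struct device *dev)
{
    printf("opening %s\n", dev->name);
    return 0;
}

static void serial_close(struct device *dev)
{
    printf("closing %s\n", dev->name);
}

static const struct device_ops serial_ops = {
    .open  = serial_open,
    .close = serial_close,
};

int main(void)
{
    struct device uart = { .name = "ttyS0", .ops = &serial_ops };

    /* A "method call" goes through the object's ops table. */
    if (uart.ops->open(&uart) == 0)
        uart.ops->close(&uart);
    return 0;
}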

Another method I use is to funnel all calls through a single point, which distributes those system calls into categories and sections. The advantage of this is debugging: I use a logging debugger with a circular buffer to track the code. Placing my own break codes and 'int 3' instructions in the code, combined with the ability to control the debug level, allows much faster resolution of complex problems that may depend on what conditions transpired in parallel with, or previous to, the fault. I also track each hardware interrupt on a conditional that is adjustable in the variables page.
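A rough sketch of the funnel-and-trace idea; the call names, levels, and stub handlers are placeholders, and the real thing dispatches actual system services rather than these stubs.

#include <stdio.h>

#define TRACE_SLOTS 64

enum call_id { CALL_READ, CALL_WRITE, CALL_MAX };

struct trace_entry { enum call_id id; int arg; };

/* Circular trace buffer: the last TRACE_SLOTS calls are always visible. */
static struct trace_entry trace_ring[TRACE_SLOTS];
static unsigned trace_head;
static int debug_level = 1;          /* adjustable, like a variables page */

static void trace(enum call_id id, int arg)
{
    trace_ring[trace_head % TRACE_SLOTS] = (struct trace_entry){ id, arg };
    trace_head++;
}

static int do_read(int arg)  { return arg; }
static int do_write(int arg) { return arg * 2; }

/* The single funnel: every call passes through here, so one breakpoint
 * (or an 'int 3' placed here) sees everything. */
static int dispatch(enum call_id id, int arg)
{
    trace(id, arg);
    if (debug_level > 1)
        fprintf(stderr, "call %d arg %d\n", (int)id, arg);

    switch (id) {
    case CALL_READ:  return do_read(arg);
    case CALL_WRITE: return do_write(arg);
    default:         return -1;
    }
}

int main(void)
{
    dispatch(CALL_READ, 10);
    dispatch(CALL_WRITE, 21);
    return 0;
}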

Another advantage of centralizing system function calls is that I can essentially throttle the speed of the code, making it slower and slower as it approaches a point of interest. This is primarily for user-space code that faults; it allows me to switch to debug, or forces a debug session on a critical fault. It is not always necessary, but some faults are not immediately obvious, even to a trained programmer.
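Something like this, in a hypothetical user-space form: the counts and delays are invented, but the point is that the funnel can insert growing delays as the call count nears a known point of interest, leaving time to attach or switch to the debugger.

#include <stdio.h>
#include <unistd.h>

static unsigned long call_count;
static const unsigned long interest_point = 2000;   /* known trouble spot */

static void throttle(void)
{
    call_count++;
    if (call_count <= interest_point && interest_point - call_count < 100) {
        unsigned long remaining = interest_point - call_count;
        /* The closer to the point of interest, the longer the pause
         * (up to roughly 10 ms per call here). */
        usleep((unsigned)((100 - remaining) * 100));
    }
}

int main(void)
{
    unsigned long i;
    for (i = 0; i < interest_point; i++)
        throttle();                  /* in practice: wrapped around the funnel */
    printf("reached the point of interest after %lu calls\n", call_count);
    return 0;
}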

Another advantage of this debugger is that it will run code symbolically. I can set up a virtual machine and perform the operations in a symbolic context: I use variables like R10 for register 10 and simply have a distributor table that matches the machine code to a simulation of that code. The most difficult part of that is the EA (effective address). If virtual effective addresses (simulating a process running on a protected-mode machine) are included, it becomes almost mind-numbingly complex. This also allows me to run code without a physical interface, or to debug virus-like code in a safe way that produces a list of what it did with which pseudo resources.
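A toy version of the idea, assuming an invented opcode encoding rather than real x86: a register file, a pseudo memory, and a distributor table that maps each opcode to a simulation routine which logs what it touched. Real instruction decoding and virtual effective addresses are where the true complexity lives.

#include <stdio.h>
#include <stdint.h>

#define NREGS 16
#define MEMSZ 256

struct vm {
    uint32_t r[NREGS];     /* R0..R15, e.g. R10 is r[10] */
    uint8_t  mem[MEMSZ];   /* pseudo resources the traced code touches */
    unsigned pc;
};

struct insn { uint8_t op, dst, src, imm; };

typedef void (*handler_t)(struct vm *, const struct insn *);

static void op_movi(struct vm *vm, const struct insn *i)
{
    vm->r[i->dst] = i->imm;
    printf("R%u = %u\n", (unsigned)i->dst, (unsigned)i->imm);
}

static void op_add(struct vm *vm, const struct insn *i)
{
    vm->r[i->dst] += vm->r[i->src];
    printf("R%u += R%u -> %u\n", (unsigned)i->dst, (unsigned)i->src,
           (unsigned)vm->r[i->dst]);
}

static void op_store(struct vm *vm, const struct insn *i)
{
    /* Effective address: here just base register + immediate offset. */
    unsigned ea = (vm->r[i->src] + i->imm) % MEMSZ;
    vm->mem[ea] = (uint8_t)vm->r[i->dst];
    printf("mem[%u] = R%u (%u)\n", ea, (unsigned)i->dst, (unsigned)vm->r[i->dst]);
}

/* The distributor table: the opcode indexes its simulation routine. */
static const handler_t dispatch_table[] = { op_movi, op_add, op_store };

int main(void)
{
    struct vm vm = {0};
    const struct insn prog[] = {
        { 0, 10, 0, 7 },   /* MOVI  R10, 7          */
        { 0, 11, 0, 5 },   /* MOVI  R11, 5          */
        { 1, 10, 11, 0 },  /* ADD   R10, R11        */
        { 2, 10, 11, 3 },  /* STORE R10 -> [R11+3]  */
    };

    for (vm.pc = 0; vm.pc < sizeof prog / sizeof prog[0]; vm.pc++)
        dispatch_table[prog[vm.pc].op](&vm, &prog[vm.pc]);
    return 0;
}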

IBM also has a great deal of documentation on Linux, and that is commendable. IBM is another bygone monopoly that has seen the light (somewhat). In their 'time of monopoly' they also used their position to do some questionable things.

Google has managed to stay away from the pitfalls of a monopoly, and that is commendable. I think it is because they were aware that it is a company killer and is not really good for people in general. I am not familiar with the founders, but I do hear the phrase "Don't be evil" mentioned as a corporate policy.

ADDED: It is very interesting to go through the detail of how the Linux kernel initializes. If I were paranoid, I would think they had used my code as a template. The truth is that there are not many ways this can be done and remain stable and proper; the order of events is determined by the hardware. Except for the fact that I used assembly right up to the point of the switch to protected mode, the code is the same. I can read the assembly, and even the C, just as easily. Some of the features of GCC are new to me and some of the terminology is different, but the concepts have to be implemented a certain way. I do mutex (mutual exclusion) and deadly-embrace (deadlock) avoidance, but I never called them semaphores. This highlights the fact that the type of activities that Ms has been involved in are completely inappropriate. Any person has to design the code around certain methods, and patenting or roadblocking a specific method that is required to achieve a goal is the same as poisoning everyone to benefit oneself.
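As a small illustration of what I mean by deadly-embrace avoidance (in pthread terms here rather than kernel semaphores, and with invented names): every path that needs both locks takes them in the same fixed order, so two threads can never each hold one lock while waiting on the other. Build with -lpthread.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter;

static void *worker(void *arg)
{
    (void)arg;
    /* Fixed order: always A before B, never B before A. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    shared_counter++;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}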

This is one of those cases where BIOS experience, assembly (AT&T and Intel syntax), protected-mode structure, and hardware experience make it almost a walk in the park. I should have my own version of the kernel with a patch file in a couple of days. The biggest gain I see in that is the fact that I have a RAM-list-based debugger that is dead solid and monumental in its capability. It works seamlessly in real and protected mode, and that is very tricky. The absolute necessity is to never take a second fault while in debug, as it is two strikes and you're out! -- reset (double fault).
