Forking Linux

#include <stdio.h>

char pt[] = "%d\n";

int main(int argc, char *argv[]) {
    /* Equivalent to printf(pt, argc), written via the System V AMD64
     * calling convention: format string in %rdi, first vararg in %esi,
     * %eax = 0 to say no vector registers carry arguments. Using
     * read-write operands lets the compiler load the registers itself
     * instead of relying on where argc happens to be spilled. */
    char *fmt = pt;
    int n = argc, ax = 0;
    __asm__ volatile ("call printf"
                      : "+D"(fmt), "+S"(n), "+a"(ax)
                      :
                      : "rcx", "rdx", "r8", "r9", "r10", "r11", "memory");
    return 0;
}
/* printf(pt, argc); strange that I can read it either way. */

I have decided to fork Linux for my own benefit. I want to integrate some non-standard hardware, like CAM and FPGAs linked to an x386+ CPU. By designing it around my OS, which is already solid, I can incorporate a POSIX interface plus an extension interface to exploit the parallelism that is becoming more prevalent. The lack of parallel execution design is a major drawback for me, as I can get much more compute power per dollar now with a threaded OS core. I designed my software specifically for this eventuality many years ago, and it is solid.

So antfarmgl will be an entire OS now. I will start with GCC compatibility and work up from there. The first need is a compiler, and that shouldn't be too hard since I have all the interfaces already; I only need to establish a standard for the connections for memory allocation, disk access, process scheduling and such. It seems like something that will be fun also, as it lets me see where some of the gotchas are and perhaps contribute some suggestions that further the utilization of multi-cores.
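One way to picture that connection standard is a table of function pointers the compiler and runtime link against, so memory allocation, disk access and scheduling can later be routed to either the POSIX layer or the extended parallel layer. This is only a sketch under that assumption; every name here is illustrative, not part of any existing API.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical service table: the "standard for the connection" the
 * text describes. A kernel or runtime would fill this in; code above
 * it calls only through the table. */
struct os_services {
    void *(*mem_alloc)(size_t n);
    void  (*mem_free)(void *p);
    long  (*disk_read)(int fd, void *buf, size_t n);
    int   (*sched_yield_hint)(void);   /* hint: let another slice run */
};

/* A POSIX-backed default table, so existing code keeps working. */
static void *posix_alloc(size_t n) { return malloc(n); }
static void  posix_free(void *p)   { free(p); }
static long  posix_read(int fd, void *buf, size_t n) {
    (void)fd; (void)buf; (void)n;    /* stub for the sketch */
    return 0;
}
static int posix_yield(void) { return 0; }

struct os_services posix_services = {
    posix_alloc, posix_free, posix_read, posix_yield
};
```

A second table with the same shape could then dispatch to the extended parallel interface without touching any caller.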

The incorporation of CAM as a system resource obviates many techniques in database, search, sort and information access. Properly implemented, it can have variable power consumption and be throttled like the temperature monitoring on a video card. Fully cooled, it would always operate in O(1) time, and as such would beat any sort/hash/regex every time. By throttling the interface I can set an energy limit so a lookup takes 2, 4, 8 and so on cycles, and thus have a guaranteed access time based on the database size. It would be necessary to throttle on database size, but I think I can add hardware design methods that scale power down continuously.
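The lookup being described can be modeled in software: every stored word is compared against the search key and the matching address comes back directly. In silicon all the comparisons happen at once, which is where the O(1) claim comes from; the loop below only simulates that, and all names are illustrative.

```c
#include <stdint.h>

/* Software model of a content-addressable memory. In hardware every
 * row comparator fires in parallel and a priority encoder emits the
 * lowest matching address; throttling would mean enabling only a
 * fraction of the comparator banks per cycle. */
#define ROWS 8

static uint32_t cam[ROWS];

/* Returns the index of the first matching row, or -1 on no match --
 * the priority encoder's output in a real CAM. */
int cam_search(uint32_t key) {
    for (int i = 0; i < ROWS; i++)   /* parallel in silicon */
        if (cam[i] == key)
            return i;
    return -1;
}
```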

Memory parallelism is a quick accelerator, but the software is not able to use it and that is why I am forking.

I can easily patch the kernel to allow me a diagnostic interface, if it isn't already there. My guess is that there is a debug mode that Linus uses. So this week is kernel to kernel hacking week.

CAM also allows some very odd accelerators in video, communications and cryptography. I have a new crypto method that is much easier and safer to use than the public/private key methods. In video it should be obvious that with a 3-factor CAM it is possible to identify a position in 3-space with an O(1) process, and with a proper FPGA interface to the CAM it can perform multiple-write, cascade-read.

It seems a real waste to pay a premium for the latest hardware that is just multiple cores on a single chip, when cheap single cores can be made to achieve the same effect. The innovation of processors is going nowhere. There is only one way to add, XOR and AND.

CAM also allows the easy implementation of many AI algorithms. CAM is not patented and so it can be contract manufactured at any FAB.

ADDED: This link at wiki discusses a journaling file system, and this was what I had decided was the best approach before I realized it was already done. The concepts there parallel my own thinking, and I also have reservations about ext4 (delayed allocation). I like the idea of being able to address more space, but loss of data on a shutdown is not acceptable in my opinion. Erase-before-write-new is not a reasonable way to deal with information, IMHO. Since ext3 exists and is stable, I will use those algorithms with my extensions. This way an ext3 volume created by me will be exactly the same image on disk; the difference is in how the software handles the parallelism of access and requests. That makes it a drop-in replacement for an existing system, and the extended interface will not change any part of the POSIX scheme except that the POSIX side will receive only one CPU, operated time-sliced. Processes that are structurally designed for it can allocate parallel threads and use the remaining CPUs through a separate interface.
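The journaling ordering that makes this safe, as ext3 does it, is: commit the change to the log first, and only once the commit record is durable touch the main structure, so a crash mid-update can always be redone from the journal. A minimal sketch of that write-ahead pattern, with in-memory buffers standing in for the disk and all names illustrative:

```c
#include <string.h>

enum { BLOCKS = 4, BLKSZ = 16 };

static char disk[BLOCKS][BLKSZ];
static struct { int blk; char data[BLKSZ]; int committed; } journal;

void journaled_write(int blk, const char *data) {
    /* 1. record intent + data in the journal */
    journal.blk = blk;
    memcpy(journal.data, data, BLKSZ);
    /* 2. commit record last: the atomic "this entry is valid" flag */
    journal.committed = 1;
    /* 3. only now update the real block */
    memcpy(disk[blk], data, BLKSZ);
    /* 4. entry no longer needed once the block is in place */
    journal.committed = 0;
}

/* Crash recovery: redo any committed-but-unapplied entry, so the
 * on-disk image is never left half-written. */
void replay(void) {
    if (journal.committed) {
        memcpy(disk[journal.blk], journal.data, BLKSZ);
        journal.committed = 0;
    }
}
```

Uncommitted journal entries are simply discarded on replay, which is why a shutdown never leaves a torn write in the main image.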

I would never use FAT or NTFS because of multiple drawbacks; I have had to fix some corrupted NTFS volumes, and that method is a total turd and an accident waiting to happen. For something whose main feature is DOS (literally meaning Disk Operating System), it has always been the worst of all possible implementations. The Microsoft idea of write-on-read shocked me and literally made me say WTF out loud. That a trillion-dollar company could actually do such a ridiculous thing does not reflect well on the company or its employees. At least AT&T made really good stuff; they just fell to hubris like many people do.

I intend to keep Debian, POSIX and ext3 compatibility; however, some of the device interfaces that are not USB or networked could be dicey. ATI drivers are a real issue, so I will choose equivalent speed without any proprietary video hardware and just use CAM if I need a fast real-time display. I am not sure that fast VR video makes sense for things other than games, as they do not sow, nor do they reap. I would say synergy, but I hate the word. This link at slashdot is about an open-source video card, and it appeared while I was trying to finish typing this. A weird coincidence, anyway.


Automated Intelligence

Mission of the infinite LOL cats