
Careful with that ACK, UGENE



UGENE is an open source package that incorporates HMMs (hidden Markov models), so I am trying out the package and intend to delve into the source a bit. It fits my general interest in Markov chains, how they are implemented, and how they are applied in unique ways.

I am really impressed with the code, the methods, and the interface; very talented designers. Below is a chromatogram representing automated sequencing output, which looks a lot like a fluorescent Sanger run with a primer, something I have done myself. High-throughput and parallel methods have since been implemented, and direct viewing with my in vivo atomic microscope can be faster. Thus some probabilistic interpretation is in order, along with a lot of overlap or multiple runs.
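
Since the HMM support is what draws me to UGENE, here is a minimal sketch of the HMM forward algorithm in Python (a made-up two-state model with invented transition and emission numbers over the four bases), just to show the recursion I want to find in the source; it is not UGENE's implementation.

import numpy as np

# Hypothetical 2-state HMM; the states might stand for something like
# "clean signal" / "noisy signal" over chromatogram base calls.
A = np.array([[0.9, 0.1],      # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.1, 0.1, 0.1],      # emission probabilities over A, C, G, T
              [0.25, 0.25, 0.25, 0.25]])
pi = np.array([0.5, 0.5])      # initial state distribution

def forward(obs):
    """Return P(observations) by summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]              # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate one step and weight by emission
    return alpha.sum()

print(forward([0, 2, 3, 1]))   # probability of the observed symbol sequence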


The under face of Alice Infinity (AI)


As part of a complete WebGL system it is necessary to have the tele-present character of the agent AI. This is one part of the complete system. Other parts are MakeHuman, MakeAnt, and MakeGenotype. The methods of the essential Markov models that underlie matter can be impressed on a structure, and for the sake of bandwidth it is possible to operate the GL interface through the web: the rendering work is done on the local machine while only the parametrics are transferred through the interface. I haven't checked WebGL recently and need to update my knowledge of it. When the face and body are complete and controlled by a remote interface, I will generate some video. I don't add sound to the video because it is so mood dependent. The video is simply intended to be informational, not mood- or mode-dependent entertainment.
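
A rough sketch of that bandwidth point in Python: only a small dictionary of parametrics would cross the wire, while the GL rendering happens on the receiving machine. The parameter names and port are invented for illustration.

import json

# Hypothetical parametrics for a face rig; only these numbers are transferred,
# the rendering itself happens on the remote viewer's machine.
face_params = {
    "jaw_open": 0.15,
    "brow_raise_left": 0.40,
    "brow_raise_right": 0.35,
    "smile": 0.60,
}

payload = json.dumps(face_params).encode("utf-8")
print(len(payload), "bytes of parametrics instead of a rendered video frame")

# With a listener on the rendering machine, this is all that would cross the wire:
#   import socket
#   with socket.create_connection(("localhost", 9999)) as sock:
#       sock.sendall(payload)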

It seems reasonable to generate a form of graphene using subatomic methods. That material really seems to be the future of technology in many respects.

I am working on some extensions to a voice interface that produces a more realistic voice from the same parametrics that might be applied to a model image. In this way it is possible to model language with speech elements. At some point the concept level must be transferred, not the product level.

Understanding Markov


The nature of the probability matrix is pivotal to understanding many different processes. The image shows what happens as the differential is integrated over time. The universe is best characterized by its changes, and so are the relationships. Circuits, fields, momentum, and motion of all forms can be modeled and thus reversed from the product back to a model. I think I had not appreciated how this all related until now. The addition of logarithmic, periodic, differential, and complex elements adds new aspects but does not change the basic principles. It relates to vector space in some very unusual ways. I feel that I have been here before, from designing a solver for the "einstein" puzzle. It is easy to see how the rate of change is incorporated in the matrix multiplication process of a probability matrix, as the sketch below shows. It does seem that the interaction of sets and the vector rotations of a Rubik's cube could be defined in this framework, as well as the solutions to the "einstein" game.
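
To make that last point concrete, here is a small numpy sketch (a made-up three-state transition matrix) showing how repeated multiplication by a probability matrix carries a state distribution forward in time toward its steady state.

import numpy as np

# Made-up 3-state transition matrix; each row sums to 1
# (row i gives the probabilities of moving from state i to each state).
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])

state = np.array([1.0, 0.0, 0.0])    # start entirely in state 0

for step in range(1, 21):
    state = state @ P                # one step of the chain
    if step in (1, 5, 20):
        print(step, state)           # the distribution settles toward a fixed point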

As time goes by it gets easier to use. I can see how it can be used for logic simulation (SPICE), though I had my own design with CAM that I feel was faster and easier to use. I wonder if the algorithms for matrix manipulation can be implemented in a type of CAM (content addressable memory); it would seem so at first glance. The Banach-Tarski paradox is still a bit sketchy for me, and perhaps it is just a matter of relating it to what I know of the operation. There are some odd things that happen, and behavior at velocities nearing the speed of light has a strangeness that rivals it; I can't say whether what is said of one is the same as the other, since I only understand one of them. I am of the impression that quantum mechanics has some rough edges that may come from the math itself or from the way it is interpreted or applied. Frequentists and Bayesians. I was doing some Lagrange equations of gravity with other elements incorporated, simulating that, and there was a realization there also. This is very much like the OpenGL cave logic world that I created. I am glad I learned all these things; they will be useful. I feel that I have ∇uated (scientific humor).
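
For the CAM question, here is the smallest sketch I can make of the idea in Python, using a plain dict as a stand-in for hardware content addressable memory: the lookup goes from content to location instead of from location to content. The row values are invented.

# Stand-in for a content addressable memory: the key is the stored content
# (here a matrix row, as a tuple) and the value is the list of addresses holding it.
cam = {}

rows = [(0.8, 0.1, 0.1),
        (0.2, 0.7, 0.1),
        (0.8, 0.1, 0.1)]   # a duplicate row, found in a single lookup

for address, row in enumerate(rows):
    cam.setdefault(row, []).append(address)

# Ask "where is this content?" instead of "what is at this address?"
print(cam[(0.8, 0.1, 0.1)])   # -> [0, 2]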

I don't feel that anything can truly be learned and retained without seeing how it applies. That is certainly true for me with programming. Somebody can talk about OO programming and any number of algorithms, but until I write my first program to implement the effect, it just seems like noise. To me, teaching is just a way to establish an agreed-upon name that describes something; the learning comes from doing.

I have a good idea for a hole in gravity that comes from the many dimensions, and I have also devised a universal language. The strange thing about the language is that it looks a lot like DNA, and in my wild imagination it made me think of a communication system between advanced alien cultures that used DNA-like communication; as a result, strange patterns sometimes developed in the communication channels and had to be cleaned out, like mold growing in the warm water of a nuclear reactor cooling tower. I would hope we aren't the fungus that fouls the universe; that would be an unfortunate position. Sounds like a good Twilight Zone episode to me.

I was thinking about Rubik's cube and matrices and decided that it would be possible to make a matrix tree solution and make it a solved game for any number of moves. It seemed that 26! * 22 would do it, so I computed the flops to see if it was doable on my machine, and 10^30 flops is not in the range of any machine in this time or the near future. That is just brute force, and I am guessing it could be finessed to 21! fairly easily. That is in the computable range of some supercomputers, but not my computer. I see in the history of its analysis that sets and vectors have been used, and even Hofstadter has had a hand on the cube. It seems that a divine solution could be had, and perhaps I will consider that, as it applies to many other situations like possible quantum states or even nuclear physics and gravity in a roundabout way.
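
A quick back-of-the-envelope check of those counts in Python, just evaluating the expressions named above rather than claiming anything about the cube's actual state space:

import math

brute = math.factorial(26) * 22      # the brute-force count mentioned above
finessed = math.factorial(21)        # the hoped-for reduction

print(f"26! * 22 ~ 10^{math.log10(brute):.1f}")      # prints roughly 10^27.9
print(f"21!      ~ 10^{math.log10(finessed):.1f}")   # prints roughly 10^19.7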

I think I am getting an idea of who knows what they are talking about and who is just tweeting their ignorance on the net. There are some real geniuses who have a presence there, and some things I could learn from them.

Language of thought


I am considering Markov models and how they can be used to derive the causal and dependent relationships in any number of applications. The tools to display and manipulate those relationships exist, but a simple interface is lacking. I hope to create a simple framework that I can expand and extend to correlate relationships as well as data; a first rough cut appears below. It has many applications.
This course in applied probability at MIT has quite a bit of material on the subject.
The graph is gnuplot output for a set of sample data using "pm3d at b".
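
As a first rough cut at that framework, here is a minimal Python sketch that estimates a Markov transition matrix from an observed sequence of states; the state labels are invented and this is only a starting point, not the framework itself.

from collections import Counter

# Made-up observation sequence; the states could be any labeled events.
sequence = ["sun", "sun", "rain", "sun", "rain", "rain", "sun", "sun"]

states = sorted(set(sequence))
counts = Counter(zip(sequence, sequence[1:]))    # count each observed transition

# Normalize each row so it becomes a probability distribution over next states.
matrix = {}
for a in states:
    row_total = sum(counts[(a, b)] for b in states)
    matrix[a] = {b: counts[(a, b)] / row_total for b in states} if row_total else {}

for a in states:
    print(a, matrix[a])
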
This is another area where symbols and definitions interact. The symbols ∩ (&cap;) and ∪ (&cup;) describe set relationships, and it does seem that if I am ever to use all of this in one single context, I must either make new symbols or use something more complex that is context sensitive. I could say set.intersect and display it as ∩ when in that context, and use my own symbol when dealing with it myself. It isn't to hide what I am doing; it is just too confusing to see the same symbols applied in too many places.

In linear algebra the expression Ax=b has a vastly different meaning, and P(A) means the probability of A in that context, while if I were programming it would be a function P with parameter A. Even within this sentence the concept of AND is used while describing the symbols ∩ and ∪ (&and;, ∧, is also a symbol for and). If a person confines their activity to a single specialty it is fine, but when a person tries to take advantage of the fact that these methods resolve to the same action in new terms, it gets odd. It isn't just i and j; it goes much farther than that when I do Python or C and deal with an array [i][j], and then there is <i>italic</i>. I think I (meaning me, and not the identity matrix, or i the imaginary axis, or i the italic tag, or even i the common C iterator) will make an extended symbol set for myself, because otherwise it is like trying to write every word as a sequence of 0s and 1s: the language is not complex enough to express all the possibilities.

It only gets worse in the future. I think that English is a stone age language. A vast majority of what is new was never known before, and as a result the language grows in the same way it was created, like some bizarre multiply emergent mess. It is bad enough that virtually every nation on the planet has its own language, but to start having sub-languages for every science is too much. AFAIK IMHO srsly!
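
A tiny Python sketch of the context-sensitive symbol idea: the operation keeps one plain internal name (intersect) and only the display layer decides whether to write it as ∩, as AND, or as my own symbol. All the names here are invented for illustration.

# The internal name is plain; the notation is chosen only at display time.
OPS = {"intersect": {"math": "∩", "text": "AND", "mine": "⨅"}}

def intersect(a, b):
    """The single underlying action, regardless of how it is written."""
    return set(a) & set(b)

def show(op, left, right, context="math"):
    """Render the same operation in whatever symbol the current context uses."""
    return f"{left} {OPS[op][context]} {right}"

print(intersect({1, 2, 3}, {2, 3, 4}))        # {2, 3}
print(show("intersect", "A", "B", "math"))    # A ∩ B
print(show("intersect", "A", "B", "text"))    # A AND B
print(show("intersect", "A", "B", "mine"))    # A ⨅ B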
