
Sometimes an equation is just a fish

A simple equation, but complex in expression. I am sick of math for a while. It is possible that gravitational magnetism and its induction of motion in the direction of acceleration is the reason for some variations in stars in a galaxy. It may also be involved in galaxy formation. It certainly is an interesting association. It will require some massive mathematical gymnastics and I am not up to it at the moment. Perhaps in a few days.

Multiple simultaneous frames of motion burn me out.

It seems that it would account for ring formation on planets. I have no equation for it, but eventually I will make one. It is 3 dependent frames of reference, vectors, and a ratio of spaces in time, so it isn't a simple calculation. There is time and Time, but that is just the problem of operator-overloading context.

It may also serve as a proof, of a sort, of gravitational waves and their interference in 3-space. Perhaps it is just the mechanics of computation. For example: F·δr. Oddly enough, it may resolve to Time And Relative Dimension In Space.

Darkness of light

It seems that the biggest problem with data from NASA and other sources is getting "data" and not an interpretation of the data. It is obvious that some of the data is incorrectly interpreted. If a system were employed where the data is linked to a method which performs the analysis, then a change in perspective or new information could be easily incorporated. As it stands, they must be looking for job security by hiding the data and offering only presumptions about it and pretty pictures.

I have some new methods, and it seems that by decoupling inertia I can travel at velocities near c with extreme accelerations that would eventually make it possible to make trips to Mars or other planets in a matter of hours. I don't see how it would ever be possible to have "warp" or FTL drive. That puts the stars too far away, for now, in a physical sense.

I have a new video analysis tool that I created today and it has probably been done by somebody already, but here is what it does. By converting an image to shades of gray and then giving each dot a "normal" vector and then "raining" on it, I end up with a vector diagram of flow which I can find derivatives of or use in other ways. By "raining" on it, I mean that I place a count on a dot and then transfer that amount based on the slope of the dot with respect to its neighbors. Over time the dots accumulate counts based on the number of "rain drops" that flow across them or settle there in the case of a circular minimum. Otherwise it shows peaks and valleys in a way I had not developed before.
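For anyone curious, here is a minimal sketch of that "raining" idea as I would redo it in NumPy; the 8-neighbour rule and the single pass over the pixels are my own assumptions about the details, not the exact program described above.

import numpy as np

def rain_flow(gray):
    """Drop one unit of 'rain' on every pixel of a grayscale height map and
    let it roll to the steepest 8-connected downhill neighbour, counting
    every pixel it crosses or settles on (a local minimum)."""
    h, w = gray.shape
    counts = np.zeros((h, w), dtype=np.int64)
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for sy in range(h):
        for sx in range(w):
            y, x = sy, sx
            while True:
                counts[y, x] += 1
                best, by, bx = gray[y, x], y, x
                for dy, dx in steps:            # find the steepest downhill neighbour
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and gray[ny, nx] < best:
                        best, by, bx = gray[ny, nx], ny, nx
                if (by, bx) == (y, x):          # local minimum: the drop settles here
                    break
                y, x = by, bx
    return counts

# Toy example: a 5x5 "valley" whose centre collects most of the rain
img = np.abs(np.arange(5)[:, None] - 2) + np.abs(np.arange(5)[None, :] - 2)
print(rain_flow(img.astype(float)))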

In the many spaces


Secret facts about Linux

Linux is like an ancient Egyptian mystery that unfolds as you use it. Today I was working with my new program that explodes images in much the same way as Google Image Swirl, except that it is the associated parts of a single image in numerous different dimensions. I decided to save as 32-bit BMP with the fourth byte set to match my selection, and on the web most people agree that nobody handles BMP transparency. I loaded the file into gimp to check my handiwork and, to my surprise, it used the extra byte in each pixel to make it transparent. How odd. That is the only program I know of that does this. The picture at the left is part of a test image and that is how gimp rendered it, even though the entire image was present. I also downloaded the source for gimp and made some mods.
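For reference, this is roughly how such a file can be written by hand; a minimal sketch using only the standard struct module, with the pixel list as a made-up example. The fourth byte is officially "reserved" in an uncompressed 32-bit BMP, which is why most programs ignore it and gimp surprised me by reading it as alpha.

import struct

def write_bmp32(path, width, height, pixels):
    """Write an uncompressed 32-bit BMP; 'pixels' is a row-major list of
    (r, g, b, a) tuples, top row first. The alpha lands in the 4th byte."""
    data_size = width * height * 4
    with open(path, "wb") as f:
        # BITMAPFILEHEADER (14 bytes)
        f.write(struct.pack("<2sIHHI", b"BM", 14 + 40 + data_size, 0, 0, 14 + 40))
        # BITMAPINFOHEADER (40 bytes): 32 bpp, BI_RGB (no compression)
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 32, 0,
                            data_size, 2835, 2835, 0, 0))
        # Pixel rows are stored bottom-up, each pixel in BGRA order
        for y in range(height - 1, -1, -1):
            for x in range(width):
                r, g, b, a = pixels[y * width + x]
                f.write(struct.pack("<4B", b, g, r, a))

# 2x2 test: top-left pixel fully transparent, the rest opaque
pixels = [(255, 0, 0, 0), (0, 255, 0, 255),
          (0, 0, 255, 255), (255, 255, 255, 255)]
write_bmp32("test32.bmp", 2, 2, pixels)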


I learned some very interesting things about imaging and I can now explode an image into parts and vertices with normal vectors and vertex shading. I am retrofitting this to the gimp source and later to a script.

I was looking at potentially hidden images and I do believe there are things hidden in Da Vinci's work, but I don't think it is a simple puzzle. I can see some patterns and they suggest inner images. I know that some think it is a mirror trick, but that is a little too simplistic a conjure for a man that complex. The new tools I have created are helping me to extract things from images that I had no idea were there.

Sub atomic farticle.

Image openGL and meta UI

I have been working with scripts, ImageMagick source ( which includes the source for display ) along with the source of many other packages in an attempt to integrate all the graphical techniques as an interface through a common program structure in the complex of antfarmgl.

I also got the package:

sudo apt-get install gbrainy

This is a Google Code game with logic puzzles. In addition, I completed the SuperTuxKart challenges to see what happens and whether it is coherent for the complete game, and I discovered an interesting "Easter Egg" which I will not reference here as that would be a spoiler.

Many of the effects that can be achieved with convert can be created by loading images with display and using the menus. The clock above was sheared using a menu command.

New scripts are appearing all the time and some are very elegant. I am planning to contribute a polygon fitting algorithm that operates like this: a reverse texture lookup is used to identify areas of the image, along with hole fill and a kind of image gravity. After that, polygons are created in OpenGL for each element identified at a very low level. It is then rendered in 3D with lighting, and then a light direction and type is fitted if possible. It proceeds through the image collecting parts in a manner similar in some aspects to the genetic polygon fitting algorithm which I saw on Roger Alsing's web log. The result uses surface geometry and lighting to further establish the coherence of the image and the association of parts, along with guesses of how it projects into zero space ( blackness ) or reflective blooms ( 100% white ) where there is no method to associate without context.

As a result, the polygons form a surface which is a 3D model and not a 2D surface, such that it can technically be used as a model when it resolves. The time to completion varies widely across image types. I discovered a method to compress video files in the process and it could compress already-compressed video by another 50% beyond a format like mp4. I am continuing to experiment with that and may have a demo image at some time in the indeterminate infinite future.
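For the curious, here is a tiny sketch of just the genetic polygon-fitting step, in the spirit of that web log rather than my full surface and lighting pipeline; it assumes Pillow and NumPy, and "target.png" is a stand-in file name.

import random
import numpy as np
from PIL import Image, ImageDraw

target = np.asarray(Image.open("target.png").convert("RGB"), dtype=np.int32)
H, W = target.shape[:2]

def random_triangle():
    pts = [(random.randint(0, W), random.randint(0, H)) for _ in range(3)]
    rgba = tuple(random.randint(0, 255) for _ in range(3)) + (random.randint(30, 120),)
    return pts, rgba

def render(triangles):
    img = Image.new("RGB", (W, H), "black")
    draw = ImageDraw.Draw(img, "RGBA")        # RGBA mode so the fills alpha-blend
    for pts, rgba in triangles:
        draw.polygon(pts, fill=rgba)
    return np.asarray(img, dtype=np.int32)

def error(triangles):
    return np.abs(render(triangles) - target).sum()

triangles = [random_triangle() for _ in range(50)]
best = error(triangles)
for step in range(20000):
    i = random.randrange(len(triangles))
    saved = triangles[i]
    triangles[i] = random_triangle()          # mutate one polygon
    e = error(triangles)
    if e < best:
        best = e                              # keep the improvement
    else:
        triangles[i] = saved                  # otherwise revert
Image.fromarray(render(triangles).astype(np.uint8)).save("fit.png")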

Integration of graphics scripts

This is available in the repositories and I have used it before, but now I want to see if there is something to be learned from FFT and wavelet interaction.

sudo apt-get install fftw3-dev libgimp2.0-dev

This is a decomposed image and it is in Filters->Generic. The script code must be compiled and installed in the plug-ins directory so that it can be registered. I am wondering whether an image that goes to Google, to Blogger, and then back can be recomposed? ( Obviously not. )


It is interesting what is lost in transit and one has to wonder how the compression and decompression are done and/or how choices are made.

What I wanted to know is: what does the Fourier transform of a wavelet decomposition look like?


So this is the image with an overlay of various FFTs and it speaks volumes to me. In addition it is possible to extract many more things. One could say that there is a near infinity to be comprehended there. Analysis in other dimensions is also very interesting. This is just preparation, as I have a new tool I want to test on the image; it combines several filters and plug-ins into a concert of action and this is to test whether it does what is predicted.
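For anyone who wants to repeat the experiment without compiling the plug-in, a minimal sketch of the same question in NumPy and PyWavelets; those are my choice of tools here, not what the gimp script uses, and the random array stands in for a real grayscale image.

import numpy as np
import pywt

image = np.random.rand(256, 256)                  # stand-in for a grayscale image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")       # one level of wavelet decomposition

for name, band in (("approx", cA), ("horiz", cH), ("vert", cV), ("diag", cD)):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(band)))   # Fourier of each band
    print(name, "spectral energy:", spectrum.sum())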

On a side note, there is duplicate invention, and the path to the answer is not usually the same. It is an infinite landscape and you can get there from anywhere. So there is this link at Slashdot, and it may be yesterday sooner than I guessed. Of course, if people understood Tesla's work, this perhaps could have been true a hundred years ago.

gimp wavelets and scripts

The image links to a script for gimp that is written in C. The method is quite interesting and it shows up in the "Generic->Wavelet decompose" menu once you have run make and make install. Compiling requires at least the dev package:

sudo apt-get install libgimp2.0-dev

While studying guile, clisp, MIT Scheme, Haskell, and other Scheme / Lisp implementations, the underlying guiding principles are becoming clear. It is easier to see how the programs are created for gimp and what the Scheme scripts actually do at a machine level.

It seems there are some underlying methods that would benefit from being implemented with a GUI and meta interface. I also have a better understanding of the underlying library access, and it should be possible to implement a complete meta interface as a windowed OpenGL task that interfaces with all of the other graphic modules simultaneously.

This is another example of where a program like Photoshop cannot compete with open source. I can now understand and modify the script to combine Fourier methods, hole filling, and various other tools I have created and published already. It is the factorial combination of actions which is the advantage, as it is with the serial combination of shell utilities using pipes:

cat afile | grep whatiwant | wc -l

Interestingly enough, while deciding to make a Scheme that interprets Scheme, I encountered wisp at "null program", by somebody who seems to be doing just that, and I will likely take a look through their code before I jump in and try my concept.

Meta language graphic script extensions

Many scripts exist in Python, Scheme, and other languages for Gimp, Blender, and other open source packages. It seems it would be possible to make a program that takes the scripts from all of the different packages and converts them to a format or language recognizable by all the programs. In this way the utilities in Gimp, Blender, and Inkscape could be shared as filters. To test the hypothesis I am going to create a script that downloads the gimp Debian package source and the inkscape source, locates the script elements, converts in both directions, creates a diff and patch, then applies that to each to determine whether the concept is possible.
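A rough sketch of the first step as I imagine it; the glob paths are where the 2.6-era gimp and inkscape source trees keep their scripts, as far as I recall, and the whole thing assumes deb-src lines are enabled.

import glob
import subprocess

for pkg in ("gimp", "inkscape"):
    subprocess.call(["apt-get", "source", pkg])   # unpacks the source in the current directory

scripts = (glob.glob("gimp-*/plug-ins/script-fu/scripts/*.scm") +
           glob.glob("inkscape-*/share/extensions/*.py"))
for path in scripts:                              # the two script sets to compare and convert
    print(path)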

Gimp Python-Fu example

This is an interesting script that shows all the possible widgets that can be added to a script and it is exactly what I was looking for. I had several options and scaled parameters that would be best evaluated with a preview and a way to choose which algorithms are applied, in which order, and finally with what factor.

It is so much easier to work with a community of people who share their work than in a code factory. Someone said that gimp can do only 90% of what Photoshop does. That may be so, but if you know how to use all the software, you have access to Google and the source code, and you can merge the different utilities like `dot`, `blender`, `inkscape`, ImageMagick and a dozen others, you can create effects and results that are many times better than what Photoshop can do. I don't just point and click and then assume somebody is going to give me a can of Code Whip to solve my problem. That doesn't even make sense in a competitive environment. Obviously your competitor can click and drool as well as you can.

In a matter of minutes I can code a Python script in `kate`, save it to the proper directory, `chmod +x` it, and have a new menu item to do a complex mathematical interpolation on the image. What I can imagine, I can implement in a few minutes. And I share what I learn, and others share what they learn, which means I have thousands of things I don't even have to type in. Open source is just beginning to come into its own in the mainstream. It requires a critical volume of people and, like all factorials, it branches to infinity very quickly.

Gnu image processing Python

Adding a Python script to `gimp` 2.6 requires that the script be in ~/.gimp-2.6/plug-ins and that it be set executable. The resulting image is from the previous post and has the number of colors reduced to 16 with smoothing, and it uses SVG ( Scalable Vector Graphics ) methods to create the result. This is not my script, I was just testing a generic script.
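A rough equivalent of that colour reduction outside gimp, assuming Pillow; the real plug-in does this (and the SVG tracing) internally, so this is only to show the idea, and "input.png" is a placeholder name.

from PIL import Image, ImageFilter

img = Image.open("input.png").convert("RGB")
smoothed = img.filter(ImageFilter.SMOOTH)         # smooth first so the palette is cleaner
reduced = smoothed.quantize(colors=16)            # reduce to a 16-colour palette
reduced.save("reduced16.png")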

The image is a test to see if the script shows in the menu, then does it run, and then does it do what I expect. True, True, True. So I have a way and a place to integrate Python-Fu, as it is called in gimp, quickly and efficiently to add an algorithm, which in this case is to see like the human eye and generate a 3D ( depth cues, shading, scaling, expectations of set ) object file with vertices and texture to be imported into blender, where I further modify the information with Python scripts to generate a complete model from a flat image.

I am going to have my own web site, and as soon as I figure out what to call it and do some intelligent proactive planning of what to install, I will place the link here and be done with blogging at this address. There are too many features that I can implement myself, like traffic monitoring, security, blog layout and associations, and I am familiar enough with them now to establish my own server. I would like to have a more convenient way to update and display information so that it is properly indexed like a wiki and yet allows people to see what is the most current interest and feed that to RSS ( Really Difficult Syndication ), hmm, that acronym doesn't seem to work? How about XML ( eXtensible Modeling Language ), you just have to try harder to make a fit I guess.


Within the register() call in the Python module:

register( "<Image>/Filters/Edge-Detect/_Trace...", # menuitem with accelerator key

Krita and Gimp comparison

You may assume the swan is white, but you would be wrong. Okay, gimp wins easily, but I thought of something: I can make a C program that uses dot, convert, dvipng and other utilities to generate images that get loaded as a UI using OpenGL, to write a program that is a UI to write itself that generates its UI by using convert, etc... Complete loop recursion.

BTW, krita-KDE4 was a bit crashy for me, so it must be a WIP, and I wonder if they are going to abandon it and apply the effort to gimp, as it really has little difference ( except fewer features ). Is that itself a feature?

I think I hit on something with recursion that solves a lot of paradoxes while I was listening to the programming courses at UNSW Sydney. Richard Buckland is very funny and could easily do some Monty Python.

ADDED: Krita isn't going to be abandoned and here is a link to Krita.org if you want to help.

Make DNA ants

#!/usr/bin/env python
#Python non-genetic dominance joke

class male:
    gene1 = "three eyes"
    gene2 = "two eyes"

class female:
    gene1 = "four eyes"
    gene2 = "two eyes"

class oddChild(male, female):
    pass

class odderChild(female, male):
    pass

oc = oddChild
print oc.gene1    # depth-first lookup finds male first: "three eyes"
oc2 = odderChild
print oc2.gene1   # female listed first, so: "four eyes"

"Open Movie Editor" ( ubuntu debian package openmovieeditor ) is a linux tool for movies and I am testing it with a movie about making movies that is in a movie about using a software that is designed to make movies (blender).

The image below links to a blog on a new open source movie that uses blender 2.5, and I will install 2.5 on another machine so I don't get strange interactions happening with a stable 2.49a.

Click on the image below to go to a Python Blender script that makes an entire city with a single click! Also Blender is in major rewrite for version 2.5 and it looks like a lot of neat new features and some changes that may take a little relearning to take full advantage of the new methods. Now if there were only a method to generate virtual people to live in the virtual city. Oh right, MakeHuman. Where I am headed with this is using scripts to generate ant hills and MakeHuman to generate ants or nano-machines and then teleputting the generated objects to terraform the galaxy. Oh, maybe I will have dinner first. Then I will take a stepping disk to the Ringworld control center.

I read an interesting article at Slashdot about slime mold designing rail systems, and it is not in the least surprising that nature can and does design the best fit for all algorithms. It reminds me of something I discovered recently which has to do with infinite computing. There is the issue: it is possible to approach infinity from the counting integers, but it may be another few trillion years before it emerges.

So at the moment I am reinstalling the new MakeHuman scripts from Blender Python. The goal is to recap the process of converting MH to MA (MakeAnt) with the new knowledge I have using ^[I][n][f][i][n][e]$(sic) computing. I want to have a better understanding of the relationship of the protein-DNA network ontology and how it can be represented and extracted to make an organism like an ant, and also generate the ontology map of sub-structures which become the organism as it expresses the homeobox (HOX) genes. Of course there is no real separation of HOX and non-HOX in any absolute sense, as everything is infinite, but it is a general metaphor.

This particular blog post will shadow that process and report what problems or advantages are gained.

Start here:
file:MakeHuman180b_1_1_2.45.rar
`unrar` is needed to unpack the Python files, and the next step is to view "READ ME FIRST.txt" and see what needs to be considered before mucking up something that will haunt me whenever I use it.

Copy the file mh180b_1_1_245.py to Blender/.blender/scripts folder. Copy the contents of bpymodules folder to Blender/.blender/scripts/bpymodules.

First of all, it seems that this is not really going to work as they describe, as the parametrics are not going to be found where they are, since the script has no way to know where it was unpacked. That is step one: will it do anything? When doing this the first time, it seemed that some of the data had to be present. Ah, I think I need to run the `.blend` from the extracted directory and then it uses those scripts.

1. Does it show the scripts? (YES) which reminds me that `zim` has a cool checkbox feature. Also new XKCD has "Rainman(Raymond) vs. Dirty Harry".

2. Does it show the model? (YES) which reminds me that diagram and svg can be converted to models also with a little code magic (Spell 198 in the grimoire).

3. Does it run the script and is the model mutable in the python window frame? (YES), after I select and run "maketarget1-1/maketarget1-1.blend" and right click in Python window and select 'Execute Script'( or use the key combination alt+p ).

4. Modify file set using script and C program based on a genetic sequence and generate an ant. Does it show properly?

METHODS: used to achieve goals.

sudo cp [thedir containing mh180]*.py /home/username/.blender/scripts/
sudo cp [thedir containing modules]*.py /home/username/.blender/scripts/bpymodules/
Run mh180b_1_1.blend with blender by clicking it!

Biological computation at Stanford, interesting talk. They are at least 10 years behind what I can do.

KDE4 is okay

KDE4 is totally awesome. I have gotten used to it and I have Wikipedia, dictionary, feeds, and many more widgets and they are really nice. I think they had a good vision for what they wanted; it was a rough start, but well worth the problems. Thank you, KDE. It isn't really that great a learning curve; I had more trouble using the Apple desktop in the genetics lab.

Okay, I admit it, this is really good. KDE4 is prime time stuff now and has the best color scheme and integration I have ever seen. A+++ It is a winner, and 9.10 is great too. The most important thing? It can browse XKCD on the desktop as a widget! And feed Slashdot and my favorite blogs as a gadget too.

I really enjoy the equation editor in Zim Wiki (( for reference, I have tested PyZim version 0.43 and it has almost identical features and seems to work quite well; I didn't get real wild, but it seems it will be more "featureful" and supportable in Python )) and it also has a diagram editor that uses graphviz now, so that is even more fun. It is part of the add-ons.

Ooh! They fixed the equation editor preview as standard to get the right window size! Neat.


\begin{cases}4x + 2y = 14 \\ 2x - y = 1.\end{cases} \,
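Just to check the demo system by hand: substituting y = 2x - 1 from the second equation into the first gives

4x + 2(2x - 1) = 14 \implies 8x = 16 \implies x = 2,\; y = 3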

digraph Matter { matter -> energy energy -> matter }

More on 'Zim, the Python incarnation'. It includes a MindMap using Cairo and so I am really looking forward to that. Below are quotes from the source tracker. Very nice, and I learned something else from the makefile :-) I didn't realize a person could execute a command to get a variable.

PYTHON=`which python`
$PYTHON
Implement a MindMap widget to track links
We need a "MindMap" widget, preferable Cairo based, to show and draw relations between pages. The widget should be able to handle complex graphs, not just hierarchical mindmaps. Each node in the graph would be associated with a page in your Notebook.

Messing with fuzzy select in gimp.


Maybe more than okay, as it has a real professional style developing and I could get used to it. I am blogging from my 32-bit Kubuntu 9.10 desktop and I actually like it now. It takes some getting used to. If you are a console user it seems frivolous to have all the widgets, gadgets, flipping, fading, pretty stuff. Okay, maybe it is frivolous, but I can tolerate it and it can be configured the way I like. Install was very smooth and the new grub is almost artsy. Good work. Now I have to see if it rocks. I know it will, as I am up on Python 2.6 and that means new features and new blender stuff. Also I have to see what is new with gimp and inkscape and so many other useful things.

I think I need a better, more standard font for blogger, or setting the default may be better, so I will fix that now. That may take some time, as I want to investigate what fonts are standard with Linux and make the best use of that so I don't have to play around as much when I go to the 10.04 version in March.

  1. firefox
  2. blender
  3. gimp
  4. zim
  5. inkscape
  6. gvim
  7. espeak
  8. graphviz
  9. idle
  10. dvipng
  11. okteta
  12. nmap
  13. g++
  14. unrar
  15. openmovieeditor
  16. kde-games
  17. zenmap
  18. ipython
  19. mesa-utils

Got firefox, blender, gimp, zim, inkscape, gvim ( okay, so I am not a purist vi person, so what ), espeak, graphviz, idle (Python console), dvipng, [latex (tex-common)], ((KHexEdit replaced by Okteta)) okteta, [hexedit = silly shell program that fails badly in konsole], nmap, g++ (to compile PolyML), unrar (for the MakeHuman python scripts extraction), openmovieeditor, ..... I am using Verdana for paragraphs and that seems okay in Firefox right out of the mill. And you bet I have Firebug, that is just too useful to be without. Wikipedia, that goes in the Firefox dictionary for sure!


motey@motey-desktop:~$ uname -a
Linux motey-desktop 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686 GNU/Linux
motey@motey-desktop:~$ cat /etc/issue
Ubuntu 9.10 \n \l
---- ALSO ----
motey@motey-desktop:~$ lsb_release -c
Codename: karmic
motey@motey-desktop:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 9.10
Release: 9.10
Codename: karmic

And some stuff to clean up, and a few disks to add back here and there. Also I have to mount the external shares for nfs. I won't post my SSH keys here though :)


sudo fdisk -l
ls -lA /dev/sd* | grep -E 'sd[abcd][0-9]'
UUID=(hexes) /mnt/external auto users,auto,atime,rw,nodev,noexec,nosuid 0 0

I did have a little scare with grub, but I am at fault there for having such convoluted hardware arrangements that shift like sand. If they come up with a grub that can unscramble my mess I want to look at that source code.

I don't recommend using the ATI proprietary drivers as they are VERY crashy; I put them in and took them out using the reference at https://help.ubuntu.com/community/RadeonDriver . Kwin seems to crash with ATI and it is a mess. If the hidden drivers ever start working I will post something here. I may lend my support to help speed up the open source drivers. They are a little slow, but acceptable.

motey@motey-desktop:~$ glxgears
14284 frames in 5.0 seconds
14565 frames in 5.0 seconds
14575 frames in 5.0 seconds

It is always something new

Below is a snap of a '.blend' from blenderwho and this is very good. I wonder if I could retrofit this against the `MakeHuman` software and then project that back to a |genotype to phenotype| constructive functional array? This way I could take the phenotype expression and retrofit it to a sequence of DNA ( at least generally ). I did mine and it has only 42 base pairs and I don't know what that means. I did the retro-transcript analysis of the images here and one has long strings of repeating base pair triplets, CATCATCATCATCAT. Very odd.

It really is so odd what is possible, and with WebGL coming it will be possible to do some things that others have not considered yet. I have been thinking about the consequences of WebGL and I can imagine some applications that defy description because they have no parallel in experience. I am going to get my WebGL working early so I can test some of this on my LAN through nfs and my local server. I have javaGL working very well and so it isn't a long walk to javaBlend and WebBlend.

Tree of living logic

I will make a bad pun and then explain. I think they are barking up the wrong tree. I know that the foundations of logic extend back to the Greeks and beyond, but I wonder if they have not established a root method which can be extended to come close to the answer but never achieve the desired result, because they have chosen a method which excludes an alternate concept. I cannot say that for certain at the moment and it is just a holistic sense of the issue. It is similar to attempting to mathematically characterize things which are infinite in scope. The methods must be different because the universe is inherently infinity upon infinity, and no matter how long you add counting numbers you will never reach even the first infinity.

This is the problem with physics: they determined there were two roads to take, and by exclusion of the first it was obvious logic that the second was correct. The problem arises when they are well down the road and it seems very strange and is going nowhere fast. The answer is that there must be a third or even fourth alternative that was overlooked when the decision was made, and now that they are well into the wilderness, few wish to stand up and say, "I think we really f4d up."; instead they add complexity upon complexity and wander about until they happen on the main road. Just my opinion. Things can be confusing and I am not always right either. There are no absolute answers when you deal with chaos, and that is what I have resigned myself to. There are some really good answers and some very useful ones, but complete certainty is just right out of the picture altogether.

I have been down this garden path many times and I always have a bad feeling about it. Some of this stuff is specious in its application and it seems a person can easily be lost in the twisty little air ducts "that all look alike" at the Hilbert Hotel. There are many people who want to be a perfect authority and that is human nature. I would certainly like to be perfect, but I have only seven toes on each foot and most people have 8 , don't they?

I am going to write C, Java and JavaScript implementations of this and see what I can make of the results.

I am going through this AI course at MIT [ And this one ] [ and this one ] [ and the tree around Wikipedia here ] [ and this web page ] [ and this at decision Trees .net ] [ and now I have an entire section of my local Wiki dedicated to the information and expect to export it to one of the AIs ] today and I hope to complete them before the end of the day. The goals are the same as always, to identify methods that can be contained in a chemical computing system.

Of course I have to have things ordered consistently in the directory, and after scanning and collecting I would do this:


ls | grep "ch[0-9]_" | rename 's/ch/ch0/ '

I did discover an interesting quirk of the relationship of matter and logic which I hope will become obvious in the application of it.

It is so wonderful when all the study begins to be usable and less frustrating. I have been studying so many different things that each day seems to be filled with frustration and confusion. I decided to make a tree with `inkscape` and though this is not art, it is not frustrating to work with. The familiarity with the interface and how to manage objects makes it possible to think of something and then make it real without the usual side trips into trying to figure out which thing to apply. I picked the framework of a tree at random from openclipart, perhaps the same way I might initialize a random number generator with time. So this is Image *image += thought[i] * rand()%(treelike).

Povray and Infinite Perspective

I am testing `povray` outside blender and wish to see what automated options can be useful to either represent internal state or be used internally by my autocode generator. This will fill up with images and scripts as I learn the techniques to generate and the application within the autocoder. I have realized by extension of a technique which views molecules in action, that there are certainly an infinite-product number of perspectives that can be had on anything. Each view is unique and unique in its combinations of dimensions. A molecule can be viewed in 3D with its electrostatic field map, its bonding points, mass density, probability of locality center, magnetic state, temporal state, linear motion, angular motion and many other ways. It is not possible for a person to relate so many infinite dimensions and there is no effective way for me to interface to the program as it considers these higher dimensional attributes. I can shade an image in red for positive and green negative and I can consider the relationships there, but I cannot apply more dimensions than this in a single concept. I need to have a communication set which indicates relationships and their application or consequence.

All of these things interact simultaneously and thus are coherent in the result. It is obvious that in addition to NMR, UV, infra-red, polarized light, x-ray, visible emission, ... there are in fact an infinite*number! of methods which can be applied to extrapolate information of state. It resolves to a selection of infinities and this is what we must do in life. Everything is infinite and we only choose that which is best in the time frame of our perception. If I can quantify that perception to a higher degree in the time frame, I will have an advantage in the management of chaos. I assume there are methods where I can operate on new infinities in such a way that they will lock and manipulate in the same way I lock and arbitrate the recognition of visual elements in 3 space.
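As a small example of the red/green shading I mentioned, a minimal sketch in NumPy and Pillow; the two-Gaussian "potential" here is just an invented stand-in for a real field map.

import numpy as np
from PIL import Image

y, x = np.mgrid[-1:1:256j, -1:1:256j]
field = np.exp(-((x - 0.3)**2 + y**2) * 8) - np.exp(-((x + 0.3)**2 + y**2) * 8)

rgb = np.zeros((256, 256, 3), dtype=np.uint8)
scale = 255.0 / np.abs(field).max()
rgb[..., 0] = np.clip(field, 0, None) * scale     # positive values shade the red channel
rgb[..., 1] = np.clip(-field, 0, None) * scale    # negative values shade the green channel
Image.fromarray(rgb).save("field.png")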

Antonym Identical. State? (AI)? no, (IA).

I was looking through Wiktionary and they have anagrams of words; there are anagrams of AI, which I would assume would be quite limited, and yet they have many. One is IA, where I live, and I started thinking about states and AI. The result is this. Yes, that is correct; the logic that leads there is convolved and left to the imagination. Some would say the antonym of AI is IA, and it is also the mirror image and should be the only anagram IMHO. The link above is from Wikipedia and ranks right up there with the best of Monty Python; it is worth a listen to the ogg audio file. Misc Tech Symbols.


sudo apt-get install kivio

I am fairly certain that I can establish a tree that is:

[desired state and structure] [connector or converter to states(+method)] [other states]

What it means is that I identify the desired state based on an input to the program, and it identifies the current state and then constructs a state sequence that fulfills the goal. It also produces a `kivio` diagram of what it did and why it decided that a specific path was optimal between State(m) and State(n). I do believe this is the best distillation of the nature of directed thought which I have developed yet. BTW, `kivio` is "cool". I think this is it, but I need to implement and test today; I think I can pipe this to the protein interface and make a projective parallel solution single molecule. It is very much like growing a crystal from the starting point ( current state ) to the ending point ( desired state ), through all the possible dimensions. That gave me a new idea. Wow, I think I get it! A single simultaneous solution matrix in nD space.
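A minimal sketch of that state-sequence idea, with the states and converters reduced to a labelled graph; the names here are hypothetical stand-ins, not the actual protein or kivio pipeline, and the search is a plain breadth-first pass.

from collections import deque

# converters[state] = list of (method, next_state)
converters = {
    "raw image":    [("threshold", "binary image"), ("grayscale", "gray image")],
    "gray image":   [("edge detect", "edge map")],
    "binary image": [("vectorize", "polygon set")],
    "edge map":     [("vectorize", "polygon set")],
    "polygon set":  [("extrude", "3D model")],
}

def plan(current, desired):
    """Breadth-first search for the shortest chain of converters."""
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        state, path = queue.popleft()
        if state == desired:
            return path
        for method, nxt in converters.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(method, nxt)]))
    return None

print(plan("raw image", "3D model"))
# [('threshold', 'binary image'), ('vectorize', 'polygon set'), ('extrude', '3D model')]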

Something you won't see on a blog by somebody at DoD. "I was typing commands and I put in 'Kill everybody.' and I meant to type 'Kill everybody else.', damn, that was a stupid mistake." ... "Who forgot to add a reset button to this drone interface?" <NO CARRIER> BOT: "Searching for command and control carrier."

ADDED: This ( the state logic of backward and forward induction ) is a ponderous solution and it will take me days to analyze the output and take care of glitches. From past experience, I guess I will not blog on the subject until it becomes coherent. I discovered something already in math and I have to make a name for it; I will call it Interpolomultication. I have no idea how to explain that one in 4-space. I have the helm, Alice Infinity.

Linux does a wonderful job of maintaining a common method for common actions across applications, and that is one of the greatest flaws of Windows. They may create standards, but applications that run as tasks on that OS have every kind of key shortcut and button you can imagine, and they change them all the time. You can't get anything done, or even remember how, when you have to learn a new set of shortcuts every time a "new and improved version" is created. It destroys productivity when someone trains and learns a method and they just change the method for the hell of it. Linux wins again.

Dream cleaners

Dreamflower is courtesy of openclipart and the shell command `locate dream`, Janet Theobroma ( which could be real, though I recognized the scientific name immediately and I have serious doubts ), and the `inkscape` file menu commands: [ Import from openclipart, Export Bitmap (Sh+Ctl+e), Document Metadata ]

It calls itself, but never returns. That is somewhat steganographic. That is a cool thing if you can figure out what dimension it comes from.

I do some things on random impulse, or so it seems. I decided I would shell out and do `locate weird` for something to do while I was wondering about an opportunity to catch a pulse from the intergalactic internet. No, I'm not kidding. So the data is in packets that are an image of sorts. They have a data frame and then a dimension name which applies to that frame and the relationship of the dimensions. It is a bit beyond WebGL, but I would be glad if I even had WebGL to use on my blog. So this is what I found:

/usr/share/doc/kde/HTML/en/kdevelop/reference/C/CONTRIB/SNIP/weird.c

I investigated and now my latest AI creature has that address and is tearing through it like a hungry wolf. WOW, that hurt. Some of this stuff is really dated. C isn't likely to change, but some of the things it references are so out of date it hurt my AI and I had to delete some of the stuff that I thought would be good gospel food for the beast. Really, ouch! It did provide a lot of good clarification for AI, but at a cost to me, actually, to pick the shards of glass out of its gears.
I am not sure when or how I `apt-get`'d "/usr/share/doc/kde/HTML/en/kdevelop/reference/", but I sure missed it on the way through. I probably got it out of interest and failed to follow through because I was busy. It looks like a gold mine of information about C, C++, graphics, X, and many other topics.

I think that is weird, and there was also something in:

/usr/share/doc/texlive-doc-en/english/FAQ-en/html/FAQ-weirdhyphen.html
/usr/share/emacs/site-lisp/emacspeak/realaudio/old-time-radio/sci-fi-horror/weird.ram
/usr/share/gimp/2.0/patterns/weird2.pat
/usr/share/lmms/samples/drumsynth/misc perc/weird1.ds
/usr/share/pyshared/twisted/trial/test/weird.py
/usr/include/X11/bitmaps/weird_size
AND More....

ADDED: most of this wasn't that weird at all and some of it was dead ends. A lot of the original stuff that references the internet is dead links now. I am not really sure that I need to know there was once a version of LaTeX that had a weird hyphen problem.

I am investigating all this stuff and I don't know why I do it, but I seem to always learn at least one interesting thing. And this is one:

locate dream | grep -E '[.]blend|[.]svg'

A little more explanation of the shell command. `locate` is a program that keeps a database which is updated by `cron`, and `grep` is a matching utility; the -E option to `grep` makes the search an extended regular expression ( try KRegExpEditor ), which has something to do with FSM and FSA. In this case it is a simple REGEX which says that I want a literal "." followed by "blend", or ( "|" ) a literal "." followed by "svg", in the file names piped ( "|" ) from `locate`.

Also:
echo [s-z]*.c
and:
ls [abc]*.h

If you want to know much more: TLDP.

Also, I think that tab expansion is getting smarter in the shell. I noticed an article on that at Debian, and it seems there is discussion and it is proceeding to do things in context: I highlight my interest, click the middle mouse button and it is on the command line, press 'Home', type 'inks'+tab, and I have a complete command to view the image. I might do other things like 'kui'+tab, etc.

GalaxNet ( as I decided to call it), is quite a bit more interesting than the internet, but the sophistication of its denizens is ponderous. I would say that an understanding of the equation which defines the universe and some understanding of n-space would be a prerequisite to even begin to comprehend some of the concepts and information. You might think that it is an isolated thing, but it is not. Just like everything in this universe, there are things going on constantly, and if you have no framework to comprehend context, it would just slip right over your head. I have heard the expression "What you don't know can't hurt you", but I beg to differ, it is always the unknown that seems to cause me the most grief.

N-Dimensional logic and space is something I have been able to piece together from matrix math, decision trees, Markov, and a huge pile of other sources. I don't insist that I am the brightest person ever born, but I have a dogged determination to know everything. Practically every waking moment is spent toward that task. Blogging helps me to store my ideas as a kind of ultimate backup; I don't always give the complete story, but I can remember what I was thinking at the time. If you want knowledge and want the ability to apply it, you must learn it for yourself. I can't just put up answers and expect that the knowledge will be used with good intent. If I give answers too freely, then it almost guarantees that the methods will be used by someone who does not consider consequence, and cannot reason into the future enough to construct the solutions and consequences themselves. I think every squirrel should have a hand grenade, but I am not in that business, at the moment.

ADDED: I did learn and clarify a few things for myself on that mission, but I still have a bad feeling about "main;" being a program that compiles with no errors and just warnings.

ADDED MORE ON GALAXNET: If you look at the history of invention, I can see that images are distorted when looking through glass and some are larger and others smaller. From that I could surmise that a process could be devised to maximize the effect and thus create an even larger view of something small. The combination of lenses also seems a likely choice in testing. The use of a mirror to achieve the same effect is also something I would obviously select. In the case of gravitational lensing, the effect exists and it distorts to amplify and shrink information. Where am I going here? It is a minor leap and a traveled path to take the concept, analyze the results, and create or place oneself in a position to observe a better outcome. That is all there is to it. I have analyzed many aspects of changes that occur to information that flows about the universe and one thing modifies another, thus that effect can be used to gather a different perspective and perhaps act to further the understanding of the product. I mess around with stuff and push it different ways, or observe and measure in different ways and devise a premise to find the optimum result that serves some purpose to gain more knowledge. So I saw something, I saw it changed something else and I used the method of that change to see something new. Actually it is very simple. I have a microscope that observes atoms combining and this is how I devised it, from an effect I noticed by accident. That is all there is to it for me, no magic spells or incantations, just observation and application. Some ( well, perhaps everybody ) would say that you cannot observe a molecule bonding due to quantum uncertainty, but fact trumps theory always. That is another reason and good enough by itself to exclude quantum mechanical abstraction as a valid explanation. It isn't evil voodoo, it is just the wrong method of description, vague, incomplete, and indirect.

Automating Images (AI) redux

It is much easier to have the program write the program, do the math, build the script, change its status, run the script, evaluate the result and continue until it has a reasonable facsimile of something I ask it to create, either with words, by visual example, or with a model vector, vertex, texture, script.


int BuildShellScript(sourceStructure *SourceStructure){
    int i=9;
    /*
    FILE *shellfp;
    char shellscriptName[64];
    int shellStatus;

    convert -depth 8 -size 256x256 xc:white -fill white -stroke red \
        -bordercolor black -border 14x14 \
        -draw "\
        fill green circle 80,70 80,90 \
        fill green circle 188,70 188,90 \
        fill green circle 80,180 80,199 \
        fill green circle 188,180 188,199 \
        "\
        large2-D4.png
    convert -fuzz 75% -transparent "#ffffff" \
        -depth 8 -resize 256x256 large2-D4.png large2-D4.png
    */
    strcpy(SourceStructure->shellscriptName,"makedice.sh\0");
    SourceStructure->shellfp=fopen(SourceStructure->shellscriptName,"wb");
    sprintf(SourceStructure->shellscriptLine,
        "convert -depth 8 -size 256x256 xc:white -fill white -stroke green \\\n");
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,
        " -draw \"\\\n");
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,
        " fill white circle 80,70 80,90\\\n");
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,
        " fill green circle 188,70 188,90 \\\n");
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,
        " fill blue circle 80,180 80,199 \\\n");
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,
        " fill red circle 188,180 188,199 \\\n");
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,
        " \" large2-D%d.png\n",i);
    fwrite(&SourceStructure->shellscriptLine,1,strlen(SourceStructure->shellscriptLine),SourceStructure->shellfp);
    SourceStructure->shellStatus=fclose(SourceStructure->shellfp);
    sprintf(SourceStructure->shellscriptLine,"chmod +x %s\n",SourceStructure->shellscriptName);
    system(SourceStructure->shellscriptLine);
    sprintf(SourceStructure->shellscriptLine,"./%s\n",SourceStructure->shellscriptName);
    system(SourceStructure->shellscriptLine);
    return SourceStructure->shellStatus;
}

Automating Images (AI)

By automating the generation of images for the puzzle game, I don't have to include image files in the distribution of the source. By using `convert`, the images are easily changed to match the preferences of the user. Here is a link to ImageMagick methods to create vector forms in an image.

cd res/
s=1
for i in 1 2 3 4 5 6 7 8 9
do
  convert -size 256x256 xc:white -font Candice.ttf \
    -depth 8 \
    -pointsize 220 -fill red \
    -bordercolor black -border 10x10 \
    -draw "text 60,188 '$i'" -fill black \
    -draw "text 65,185 '$i'" large2-A$s.png;
  s=`expr $s + 1`
done
for i in large2-A*.png
do
  x=`echo "$i"|sed s/[.]png/[.]png/`
  convert -fuzz 75% -transparent "#ffffff" \
    -depth 8 -resize 256x256 "$i" "$x"
done
s=1
for i in A B C D E F G H I
do
  convert -size 256x256 xc:white \
    -depth 8 \
    -pointsize 220 -fill red \
    -bordercolor black -border 10x10 \
    -draw "text 70,208 '$i'" -fill black \
    -draw "text 75,213 '$i'" large2-B$s.png;
  s=`expr $s + 1`
done
for i in large2-B*.png
do
  x=`echo "$i"|sed s/[.]/[.]/`
  convert -fuzz 75% -transparent "#ffffff" \
    -depth 8 -resize 256x256 "$i" "$x"
done
s=1
for i in I II III IV V VI VII VIII IX
do
  convert -size 256x256 xc:white \
    -depth 8 \
    -pointsize 150 -fill red \
    -bordercolor black -border 10x10 \
    -draw "text 60,188 '$i'" -fill black \
    -draw "text 70,185 '$i'" large2-C$s.png;
  s=`expr $s + 1`
done
for i in large2-C*.png
do
  x=`echo "$i"|sed s/[.]/[.]/`
  convert -fuzz 75% -transparent "#ffffff" \
    -depth 8 -resize 256x256 "$i" "$x"
done
convert -depth 8 -size 256x256 xc:white -fill white -stroke red \
  -bordercolor black -border 14x14 \
  -draw "\
    fill green circle 80,70 80,90 \
    fill green circle 188,70 188,90 \
    fill green circle 80,180 80,199 \
    fill green circle 188,180 188,199 \
    "\
  large2-D4.png
convert -fuzz 75% -transparent "#ffffff" \
  -depth 8 -resize 256x256 large2-D4.png large2-D4.png
cd ..

I know the vector image is a bit askew, but I just did this for the heck of it as an example. I am now going to make the program do the necessary computation when it runs.
