DNA sequencing price

I discovered a way to sequence DNA that could be mass produced for about $100 per whole genome with the base pairs in order. I suppose the sequencing could be done in about 4 to 5 seconds if designed properly. I see that $1000 genomes may be possible in 2013, but this is changing so fast that I would guess it is a race to the bottom and a lot of people are going to lose money. I see that nanopore sequencing was at about $4000 and will be obsoleted by new techniques. I just wonder what is going to be done with the data. There is already so much data that how the data is understood is the real challenge. It should be possible to identify the flow of changes in organisms and perhaps the boundary condition that allows mutation transforms. Obviously a mutation must be viable and happen in sequence, with some vestigial weight allowed. The advantage of the technique I realized is that it also allows the genome to be partitioned and recombined in new ways for experimental analysis of how a slightly modified gene structure influences cell processes. It is a quickly changing business and the mitochondrial OS is certainly the next step. The ability to modify the genome in action would allow a damaged genome to be restored on the fly, as well as repairing a changed whole body genome to its original state.

Blender 2.6 is quite a bit different to use, but is better once a person gets used to the interface. It is more intuitive and has some nice new features. I tried out localized sound, which is pretty neat, as is how easy it is to implement physics. The interface has not changed enough to make it irritating, but it does take some getting used to.


I would guess that the "Ringworld autodoc" is a not just a real scientific possibility, but a given in the not so distant future.

Gedanken funk


I was thinking about length contraction with velocity again and trying to make an argument that could be used to consider the problem. I considered a ship traveling at a significant percentage of the speed of light. I then considered what would happen if a crack developed amidships and it started to separate. Then I wondered what would happen if it were completely separated at the fissure, and then if the pieces were separated by some considerable distance. So when does it stop being a single object that is subject to length contraction? It implies that the concept of time and length as applied to an existential entity is improper. If there is no reason to assume that covalent bonding implies some magical character to matter, then any two co-moving objects in the entire universe would be contracted. It is true that different parts of any conglomerate would have different velocities, and electrons moving in wires would be in a different relative state. It seems absurd that a "frame of reference" other than some imaginary axis could exist, and I also wonder what time clock would be applicable to an electron. What temporal effect would an isolated electron experience? It would seem that the description and application of relativity is incorrect. There is no doubt that temporal distortion does occur and that gravity influences time; however, the application and its consistency are in question.

There are some very real questions about how the mathematics of relativity is applied, and even the idea of the primal origin of the universe may just be a situation where alternate hypotheses are never considered due to the focus and peer pressure of making a living in physics education and publishing. The focus is on teaching that which is known as of this time, and it will certainly change if new information can make it through the veil of wishful certainty.

I could offer no coherent solution to some anomalous data, but when the entire process is incomplete, it perhaps cannot be explained without further information about the universe. Perhaps it will never be solved until somebody can go and look to see whether there is molecular hydrogen between the stars, or whether the bounds of the universe expand to infinity. It makes little difference in everyday life, and my only interest in the matter is to provoke thought so that I might learn something new.

There were recent developments from the after-hours janitors at the Large Hadron Collider (LHC), where they tweeted that they had accelerated mustard packets to a significant percent of the speed of light, collided them with a ham sandwich, and discovered a new particle they called a poupon. It actually travels slower than zero and has no mass, but creates the flavor of many different condiments. They come in three colors, which are gray, grey and gray, which combine to a hadron that has charm, smell, and is two thirds left-handed yellow.

Star plots


Analyzing the spectrum of stars is actually easier than voices, as a star sings the same notes always. The plot is done with matplotlib from the output of the open source program "SPECTRUM", read in with line.split(). It allows me to automate the creation of star spectra and then compare them by additive or subtractive synthesis with red shift and interstellar absorption. An exploding star would have a voice over time, and I would guess that variable stars would sing a fairly simple song. If I convert the emission line identity for H etc. to notes of a symphony, then it might be a very dull presentation.
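As a rough sketch of that plotting step (the file name and the two-column wavelength/flux layout are my assumptions about the "SPECTRUM" output):

import matplotlib.pyplot as plt

# Assumed two-column output: wavelength and flux on each line.
wavelengths = []
fluxes = []
with open("star.spc") as fin:          # hypothetical output file from "SPECTRUM"
    for line in fin:
        fields = line.split()
        if len(fields) < 2:
            continue
        wavelengths.append(float(fields[0]))
        fluxes.append(float(fields[1]))

plt.plot(wavelengths, fluxes)
plt.xlabel("Wavelength")
plt.ylabel("Flux")
plt.show()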

It is very similar to how I automate the production of 3D objects with blender and then compare to objects in an image for recognition. It is actually much less complicated.

Stars that speak

I am really impressed with the match add-on to Sonic Visualizer, as it took samples from two speakers at different cadence and pitch and matched them perfectly!! I should be able to match stars' "voices" with this pretty easily.

I discovered Sonic Visualizer, which seems useful for sound analysis. I have also tested loris some more; it doesn't like files that are stereo or have any extraneous header info, but it seems to work quite well. I am considering a merger of opencv and frequency spectrum plots to see if opencv can identify the face of a star by its spectrum. I would assume that if the same framework were maintained, it would work. I am also testing to see if I can generate a spectrogram of words by various speakers and see if they can be matched with opencv.



import loris

filea = "AiffSample"
# Load the input file
fin = loris.AiffFile( filea + ".aiff" )
samples = fin.samples()
sr = fin.sampleRate()
# Configure the analyzer component
FUNDAMENTAL = 415.0
myAnalyzer = loris.Analyzer( .8 * FUNDAMENTAL, FUNDAMENTAL )
myAnalyzer.setFreqDrift( .2 * FUNDAMENTAL )
# analyze and store partials
partials = myAnalyzer.analyze( samples, sr )
print 'found %d partials' % ( partials.size() )
# export and save to SDIF file
fsdif = loris.SdifFile( partials )
fsdif.write( filea + ".sdif" )
# synthesize
sampsout = loris.synthesize( partials, sr )
# export samples to AIFF file
faiff = loris.AiffFile( sampsout, sr )
faiff.write( filea + "2.aiff" )
exit()
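As a loose sketch of the opencv matching idea above (the wav file names are hypothetical, it assumes mono files, matplotlib draws the spectrogram images, and cv2.matchTemplate scores a patch of one against the other):

import matplotlib.pyplot as plt
import cv2
from scipy.io import wavfile

def spectrogram_image(wav_name, png_name):
    # Render a spectrogram of a mono wav file to an image file for opencv.
    sr, samples = wavfile.read(wav_name)
    plt.figure()
    plt.specgram(samples, NFFT=1024, Fs=sr, noverlap=512)
    plt.axis('off')
    plt.savefig(png_name, bbox_inches='tight')
    plt.close()

spectrogram_image("speaker_a.wav", "speaker_a.png")   # hypothetical file names
spectrogram_image("speaker_b.wav", "speaker_b.png")

# Score a patch of one spectrogram against the other.
img_a = cv2.imread("speaker_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("speaker_b.png", cv2.IMREAD_GRAYSCALE)
template = img_a[50:150, 50:250]                        # arbitrary patch of speaker A
result = cv2.matchTemplate(img_b, template, cv2.TM_CCOEFF_NORMED)
print("best match score %f" % result.max())

A high score would suggest the two spectrograms share a local pattern, which is the kind of "voice" match I am after.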

The sound of burning fusion and crushing death


I am experimenting with sound because it is a spectrum of frequencies like the stars and galaxies, and something is to be learned by knowing how to analyze and produce that matrix. The image is the new Cecilia4 Python interface to "csound", and it can be had at this link to Cecilia4.

export OPCODEDIR=/usr/local/lib/csound/plugins

That must be in ".bashrc" or it does strange things to the machine and it cries.

While thinking about this, it would seem that black holes are the ultimate example of inelastic collisions, and in addition to the fact that they would reflect the gravitational and electrical field of the entire universe in inverse square proportion, they would also reflect the relative momentum as well as the angular momentum of all space. Whether space is expanding or black holes are dragging the galaxies ever farther apart, the music of all the spheres in the heavens would sing an interesting tune if I can break them down into their interacting parts. The coherence of their sum should reveal some more information that could model the nature of the universe in time and space.


Finite Element Fourier Trees with Python

I decided to experiment with sound. While watching a movie, I realized that little sequences of music were used to indicate moods, and I decided that synthesizing and recognizing sound would be something to test my new skills on. I downloaded a fourier spectrum analysis tool:



git clone https://github.com/vain/rtspeccy.git

The image is from a running trace of a sound. I thought that perhaps I could use finite element analysis to find the meaning in the medium. I could break the frequency and relative intensities into chunks in time, intensity and frequency as a 2D array, and those chunks would be the items to select and combine for trees. Since there is no way to know what the right answer is, I decided it might be a tree within a tree, and so I needed two speakers saying the same thing, solving for where their maximums intersect and how I change the data to make them resolve to the same structure. I did some Internet magic and found a set of 20 speakers, each saying a set of 10 different sentences, and that should serve. Python has sound utilities and I can access the FFTW3 library just as well in Python as in "C". It is a little easier to script in shell and Python; if it needs to be accelerated I can go to "C", and if absolutely necessary assembly, but the compilers are so good now that it rarely benefits to do that.
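A minimal sketch of the chunking idea, using numpy's FFT in place of the FFTW3 bindings (the frame size and the synthetic test tone are just placeholders):

import numpy as np

# Placeholder signal: one second of a 440 Hz tone sampled at 8 kHz.
sample_rate = 8000
t = np.arange(sample_rate) / float(sample_rate)
signal = np.sin(2 * np.pi * 440.0 * t)

frame_size = 256
frames = len(signal) // frame_size

# 2D array: rows are time frames, columns are frequency bins (intensities).
spectra = np.zeros((frames, frame_size // 2 + 1))
for i in range(frames):
    chunk = signal[i * frame_size:(i + 1) * frame_size]
    spectra[i, :] = np.abs(np.fft.rfft(chunk))

# Each row is now a chunk in time, intensity and frequency that can become an item in a tree.
print(spectra.shape)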

It serves as a test for resolving some dimensional aspects of neutrinos and light, or general EM from distant objects, by automatic means. It allows me to apply the technique to identify patterns. It would even identify internal patterns to reveal a "voice" of a particular object, like a particular supernova explosion category, or many other relationships that become too complex to directly challenge with the amount of data available today. My hope is that I can pull "voices" out of FITS databases of frequency, time and intensity data.
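A hedged sketch of pulling a spectrum out of a FITS file, assuming the data is a simple 1D array in the primary HDU and the file name is hypothetical (astropy.io.fits is shown; pyfits uses the same calls):

from astropy.io import fits

# Assumes "object.fits" holds a 1D spectrum in its primary HDU.
hdulist = fits.open("object.fits")
spectrum = hdulist[0].data
header = hdulist[0].header
hdulist.close()

print("points in spectrum %d" % len(spectrum))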

It seems that it would be sensible to solve a single temporal frame of elements as a factorial tree solution and then make each frame a factorial element in a second tree that grew sequentially in time, rather than attempting to do a factorial of all the parts at once.

I will also see if output from MBROLA, espeak and festival can match the real voice data in some dimension.

An interesting site that I discovered is "loris" at this link, and I will also provide this link to https://github.com/vain/rtspeccy.

Big trees fall hard and kill the python

failure to create cairo surface: invalid value (typically too big) for the size of the input (surface, pattern, etc.)

I couldn't really include the largest graph as it would be too much (it is 22,000 x 1,000 pixels, or about 80 MB of data). The continuing tree tests found the limit of graphing in Python, and perhaps of dot itself.




[9, 7, 8, 25, 2, 8, 9, 3, 6]
[1, 3, 5, 3, 3, 1, 5, 1, 2]
['vase', 'coin', 'ring', 'gold', 'copper', 'lead', 'painting', 'silver', 'tv']
Dictionary Length 312 2 to the nth 512
Scope of solution 206


I used some csv code to input arrays as it is easier.

from matplotlib import mlab

# Lists must exist before they are appended to.
verts = []
values = []
weights = []
item_names = []

r = mlab.csv2rec('/home/user/tree.csv')
N = len(r)
for row in r:
    verts.append(row)
for vert in verts:
    values.append(vert[0])
    weights.append(vert[1])
    item_names.append(vert[2])

Below is the csv file that was usable without overflow. I am sure I can do larger trees, but graphing them is a problem unless I do something else for large trees. It is interesting that the running time has everything to do with the scope of the solution and not the size of the problem. I would guess that this has something to do with sparse matrices as well.

value,weight,name
9,1,"vase"
7,3,"coin"
8,5,"ring"
25,3,"gold"
2,3,"copper"
8,1,"lead"
9,5,"painting"
3,1,"silver"
6,2,"tv"

Graphviz trees and python recursive knapsack

Added a few embellishments to the code for clarity, including a title and the number of items in the pack, as well as ordering from top to bottom.



Everything is easier with Python as your friend. I assumed correctly that an interface to the "GraphViz" dot language had been implemented in Python. So here is the tree solution from the previous post (generally), and it indicates the maximum or last maximum knapsack in gray. It incorporates trees, knapsack, directed graphing, Python and even recursive programming. So this actually solves the knapsack in general. Just an exercise in programming.



import pydot

graph = pydot.Dot(graph_type='digraph')
home_dir = "/home/user/"
values = [9, 7, 8]
weights = [5, 3, 2]
item_names = ["vase", "coin", "ring"]
best_color = "gray68"
best_style = "filled"
best_shape = "box"
knapsack_holds = 5
tree_depth = len(values)
pack_best = 0
node_best = pydot.Node("No win", style=best_style, fillcolor="red")

def create_node(source_node, dest_node):
    edge = pydot.Edge(source_node, dest_node)
    graph.add_edge(edge)

def create_best_node(dest_node):
    return pydot.Node(dest_node, style=best_style, fillcolor=best_color, shape=best_shape)

def create_node_name(level, pack_free, pack_value):
    global pack_best
    global node_best
    node_a = create_best_node("Level %d Weight %d Value %d" % (level, knapsack_holds - pack_free, pack_value))
    if pack_value >= pack_best:
        node_best = node_a
        pack_best = pack_value
    return "Level %d Weight %d Value %d" % (level, knapsack_holds - pack_free, pack_value)

def make_node(level, pack_free, pack_value):
    source_node = create_node_name(level, pack_free, pack_value)
    if pack_free - weights[level] >= 0:
        dest_node = create_node_name(level - 1, pack_free - weights[level], values[level] + pack_value)
        create_node(source_node, dest_node)
        if level > 0:
            make_node(level - 1, pack_free - weights[level], values[level] + pack_value)
    dest_node = create_node_name(level - 1, pack_free, pack_value)
    create_node(source_node, dest_node)
    if level > 0:
        make_node(level - 1, pack_free, pack_value)

if __name__ == "__main__":
    make_node(tree_depth - 1, knapsack_holds, 0)
    graph.add_node(node_best)
    graph.write_png(home_dir + "knapsack_tree.png")

Graphing Bellman

Trees and their implementation in Python make the concepts easier to understand. I use the Zim wiki along with the "kate" editor, MIT class 600, IPython, and many other open source programs. The graph was created in Zim, which has Python extensions. I reviewed the MIT 600 course again today and there are some very interesting things about probability and logical fallacies. I particularly enjoyed the anecdote about shooting at a wall and then painting a bull's eye afterward to make it appear as if the person was a perfect marksman. There was also a related reference to the "minimum principle".



digraph KnapSack {
    "Index"->"2-5-0"
    "Objects [A,B,C]+Weights [5,3,2]+Value[9,7,8][]"->"Weight left"->"2-5-0"
    "Total Value"->"2-5-0"
    "2-5-0"[color=green]
    "1-5-0"
    "0-5-0"
    "X-5-0"[color=red]
    "2-5-0"->"1-5-0"[label=" Leave 2" color="orange"][color=blue]
    "2-5-0"->"1-3-8"[label=" Take 2" color="orange"][color=blue]
    "1-3-8"->"0-3-8"[label=" Leave 1" color="orange"][color=blue]
    "0-3-8"->"X-3-8"[label=" Leave 0" color="orange"][color=blue]
    "0-3-8"->"X-X-X"[label=" Leave 0" color="orange"][color=blue]
    "1-3-8"->"0-0-15"[label=" Take 1" color="orange"][color=blue]
    "0-0-15"->"Y-Y-Y"[label=" Nothing" color="orange"][color=blue]
    "0-0-15"->"Y-Y-Y"[label=" Nothing" color="orange"][color=blue]
    "1-5-0"->"0-5-0"[label=" Leave 1" color="orange"][color=blue]
    "1-5-0"->"0-2-7"[label=" Take 1" color="orange"][color=blue]
    "0-2-7"->"X-2-7"[label=" Leave 0" color="orange"][color=blue]
    "0-2-7"->"X-X-X"[label=" NO" color="orange"][color=blue]
    "0-5-0"->"X-5-0"[label=" Leave 0" color="orange"][color=blue]
    "0-5-0"->"X-0-9"[label=" take 0" color="orange"][color=blue]
}
numCalls = 0

def maxVal(w, v, i, aW):
    # Exhaustive recursive knapsack: best value using items 0..i with available weight aW.
    #print 'maxVal called with:', i, aW
    global numCalls
    numCalls += 1
    if i == 0:
        if w[i] <= aW:
            return v[i]
        else:
            return 0
    without_i = maxVal(w, v, i-1, aW)
    if w[i] > aW:
        return without_i
    else:
        with_i = v[i] + maxVal(w, v, i-1, aW - w[i])
    return max(with_i, without_i)

def fastMaxVal(w, v, i, aW, m):
    # Memoized version: m caches results keyed by (i, aW).
    global numCalls
    numCalls += 1
    try:
        return m[(i, aW)]
    except KeyError:
        if i == 0:
            if w[i] <= aW:
                m[(i, aW)] = v[i]
                return v[i]
            else:
                m[(i, aW)] = 0
                return 0
        without_i = fastMaxVal(w, v, i-1, aW, m)
        if w[i] > aW:
            m[(i, aW)] = without_i
            return without_i
        else:
            with_i = v[i] + fastMaxVal(w, v, i-1, aW - w[i], m)
        res = max(with_i, without_i)
        m[(i, aW)] = res
        return res

# A fresh memo dictionary is used for each problem so cached (i, aW) results
# from one weight/value set are not reused for the next.
m = {}
weights = [5, 3, 2]
values = [9, 8, 7]
print fastMaxVal(weights, values, len(values)-1, 5, m), numCalls
m = {}
weights = [1, 5, 3, 4]
values = [15, 10, 9, 5]
print fastMaxVal(weights, values, len(values)-1, 5, m), numCalls
m = {}
weights = [1, 1, 5, 5, 3, 3, 4, 4]
values = [15, 15, 10, 10, 5, 5, 5, 5]
print fastMaxVal(weights, values, len(values)-1, 15, m), numCalls
m = {}
weights = [1, 1, 5, 5, 3, 3, 4, 4, 1, 1, 5, 5, 3, 3, 4, 4]
values = [15, 15, 10, 10, 5, 5, 5, 5, 15, 15, 10, 10, 5, 5, 5, 5]
print fastMaxVal(weights, values, len(values)-1, 15, m), numCalls

Besides the graphing of functions, it is possible to take the data from a decision tree in Python and write it out to plot as a digraph. Using something new I learned, it is possible to create a RoseGarden MIDI file and a video of the changes to a graph, to indicate how it is processed, using dvd-slideshow and ffmpeg to create a complete video presentation with annotated source and speech.

dot -Tjpg digraph.dot -o dgraph1.jpg
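A rough sketch of the graph-to-video idea (the step list and frame names are hypothetical, and plain ffmpeg stands in for dvd-slideshow here):

import pydot
import subprocess

# Hypothetical sequence of edges added to the graph one step at a time.
steps = [("2-5-0", "1-5-0"), ("2-5-0", "1-3-8"), ("1-3-8", "0-0-15")]

graph = pydot.Dot(graph_type='digraph')
for i, (src, dst) in enumerate(steps):
    graph.add_edge(pydot.Edge(src, dst))
    # One frame per change to the graph; frames may need padding to a common size.
    graph.write_png("frame_%03d.png" % i)

# Assemble the frames into a video at one frame per second.
subprocess.call(["ffmpeg", "-y", "-framerate", "1", "-i", "frame_%03d.png",
                 "-pix_fmt", "yuv420p", "graph.mp4"])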

Square Pythons

A reference for using polyfit is here at scipy.



import numpy as np
import matplotlib.pyplot as plt

powers_x = 4
array_size = 5
a1 = []
a2 = []
for s in xrange(array_size):
    a1.append(s**2)
    a2.append(s)
# Fit a polynomial of degree powers_x to the points (a1, a2).
k = np.polyfit(a1, a2, powers_x)
inds = np.arange(powers_x + 1)
fig = plt.figure()
ax = fig.add_subplot(111)
# Plot the fit coefficients by index.
ax.plot(inds, k)
plt.show()

Now fitting the curve using poly1d.
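A minimal sketch of that step, refitting the same toy data and evaluating the polynomial with poly1d:

import numpy as np
import matplotlib.pyplot as plt

# Same toy data as above: x values are the squares of 0..4, y values are 0..4.
a1 = [0, 1, 4, 9, 16]
a2 = [0, 1, 2, 3, 4]

k = np.polyfit(a1, a2, 4)
p = np.poly1d(k)            # polynomial object built from the fit coefficients

xs = np.linspace(min(a1), max(a1), 100)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(a1, a2, 'o')        # original points
ax.plot(xs, p(xs), '-')     # poly1d evaluation of the fit
plt.show()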
