On the left is a YUV-to-RGB conversion rendered into a texture; I find it interesting to see what the YUV representation generally does to the data. On the right is video read from an FLV file and converted back from YUV to RGB in a texture.
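For reference, the per-pixel conversion back from YUV to RGB can be sketched as below. This is a minimal sketch assuming full-range BT.601 constants (the usual textbook values; libswscale does the same thing in fixed point); the function name `yuv_to_rgb` is my own.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV (YCbCr) pixel to RGB.
    y, u, v are 0..255; u and v are centered on 128."""
    d = u - 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    # Clamp back into the displayable 0..255 range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# Neutral chroma (u = v = 128) should reproduce the luma as gray.
print(yuv_to_rgb(128, 128, 128))
```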
This test is meant to let video be pasted onto textures, textures become video, and snapshots of scenes become textures that in turn become frames of video, recursively. The AV library has a steep learning curve, but so do other things like matrices, OpenGL, C, YUV, and the universe itself. This follows from texture generation: scenes of primitives are generated, those scenes are animated, and they become elements of the final production. By using Fourier transforms to generate sound in conjunction with models and animation, the next step is complete presentations that incorporate espeak, mbrola, LaTeX, matplotlib, raw data transforms, DiGraphs, web pages, Zim pages, and a descriptor that serves as the script of a given context. Interacting with the script lets decision trees select the appropriate script path for a particular choice. I suppose it is more like a five-dimensional script, because it extends in space and time as well as within the variations imposed upon it.

It is very simple to add entertainment embellishments, or to exclude them when I become bored. That is the biggest problem I have with any entertainment medium: I guess the ending, can't stand to see or hear it twice, and usually imagine better things while watching. I really hate hearing music at the supermarket, as it grates on me when it is not new (which is always). I already know it and have heard it too many times; it was new once and now is just irritating. Supposedly it has a calming, mood-elevating effect, but for me it is like being poked with a stick until I can get what I want and get out. It does not generate a mood for me, except a bad one.
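The Fourier-transform approach to generating sound can be sketched as additive synthesis: summing sinusoidal partials is building a signal directly from its Fourier components. This is only an illustrative sketch; `additive_tone` and the frequencies are my own choices, not anything from the pipeline described above.

```python
import math

def additive_tone(partials, duration=0.5, rate=44100):
    """Sum sinusoidal partials, given as (frequency_hz, amplitude) pairs,
    into a single PCM sample buffer in the range -1.0..1.0 (assuming the
    amplitudes sum to at most 1)."""
    n = int(duration * rate)
    return [sum(a * math.sin(2 * math.pi * f * i / rate) for f, a in partials)
            for i in range(n)]

# A 220 Hz tone with two harmonics on top of the fundamental.
buf = additive_tone([(220, 0.6), (440, 0.25), (660, 0.15)])
```

A buffer like this can be written out as raw PCM and muxed alongside the generated video frames.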
Some more good libav reference is Dranger, and an update for the deprecated img_convert() is at Tech Bits, mainly for Apple but it works on Linux as well. I have been implementing a NURBS (non-uniform rational B-spline) approach to video so that I can associate a video with a NURBS curve and warp the video based on my interest. This is mainly for MIT courses that I have mostly understood but want to review without going through everything, so I warp parts of the time frame to fold the video such that sections disappear or reappear by bending the overall timeline.
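The time-warp idea can be sketched as a one-dimensional NURBS curve evaluated with de Boor's algorithm: the parameter is playback position, the curve value is the source-video time, and moving control points or weights bends the timeline. This is a minimal sketch under the usual clamped-knot conventions; `nurbs_warp` and the example knot/control values are my own, not from any library.

```python
def nurbs_warp(u, degree, knots, ctrl, weights):
    """Evaluate a 1-D NURBS curve at parameter u using de Boor's
    algorithm, carried out in homogeneous coordinates (w*t, w).
    Here u is playback position and the result is source time."""
    t, p = knots, degree
    if u >= t[-p - 1]:                       # clamp to the final span
        k = len(t) - p - 2
    else:
        k = next(i for i in range(p, len(t) - p - 1) if t[i] <= u < t[i + 1])
    # Lift the relevant control points into homogeneous form.
    d = [(weights[j + k - p] * ctrl[j + k - p], weights[j + k - p])
         for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (u - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = tuple((1 - alpha) * a + alpha * b
                         for a, b in zip(d[j - 1], d[j]))
    return d[p][0] / d[p][1]                 # project back: divide by weight

# Quadratic curve on a clamped knot vector; with evenly spaced control
# points and unit weights this is the identity map on time.
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
source_time = nurbs_warp(0.5, 2, knots, [0.0, 0.5, 1.0], [1.0, 1.0, 1.0])
```

Raising a control point's weight drags the warp toward that point's time value, which is what compresses one section of the video and stretches another without any cut.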
There are some very odd things that show up as patterns when all the different matrices from different sources are multiplied together. I am sure they are significant when viewed in the context of human experience. There are clues and vectors everywhere that intersect in such odd ways.
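One concrete instance of matrices from different sources composing cleanly: the textbook BT.601 RGB-to-YUV matrix multiplied by its YUV-to-RGB counterpart comes out very close to the identity, which is exactly the round trip the texture test above performs. A small sketch (plain Python, constants are the common published ones):

```python
def matmul(a, b):
    """3x3 matrix product on plain nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Analog BT.601 conversion matrices with the usual textbook constants.
RGB2YUV = [[ 0.299,    0.587,    0.114  ],
           [-0.14713, -0.28886,  0.436  ],
           [ 0.615,   -0.51499, -0.10001]]
YUV2RGB = [[1.0,  0.0,      1.13983],
           [1.0, -0.39465, -0.58060],
           [1.0,  2.03211,  0.0    ]]

# The round trip should be (nearly) the identity matrix.
roundtrip = matmul(YUV2RGB, RGB2YUV)
```

The residue left over from the rounding of those constants is one source of the faint patterns that survive a YUV round trip.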