Here is the C code that works for YUYV 4:2:2, which packs two pixels into 4 bytes as Y1 U Y2 V. Y1 is the luminance (gray scale) of pixel one and Y2 is the luminance of pixel two. For the second, gray-scale video insert the conversion is simply r = g = b = y. After conversion the values must be clamped to the range 0–255. Note that r = r & 0xff is not a clamp: masking wraps out-of-range values around instead of pinning them at the limits, so use comparisons.
r = y + 1.370705 * v;  g = y - 0.698001 * v - 0.337633 * u;  b = y + 1.732446 * u;  (here u and v are the stored chroma bytes with the 128 offset removed, i.e. u = U - 128 and v = V - 128)
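Putting the formulas and the clamp together, a minimal sketch of the per-macropixel conversion might look like this (function names are mine, not from the original code):

```c
#include <stdint.h>

/* Clamp a converted component into the valid 0..255 range.
   Masking with &0xff is NOT a clamp -- it wraps (e.g. 260 becomes 4),
   which produces wild colors near the limits. */
static uint8_t clamp(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/* Convert one YUYV macropixel (4 bytes: Y1 U Y2 V) into two RGB pixels.
   U and V are shared by the pixel pair and are stored offset by 128,
   so subtract 128 before applying the coefficients. */
static void yuyv_to_rgb(const uint8_t yuyv[4], uint8_t rgb[6]) {
    int u = yuyv[1] - 128;
    int v = yuyv[3] - 128;
    for (int i = 0; i < 2; i++) {
        int y = yuyv[i * 2];            /* Y1 for pixel 0, Y2 for pixel 1 */
        rgb[i * 3 + 0] = clamp((int)(y + 1.370705 * v));
        rgb[i * 3 + 1] = clamp((int)(y - 0.698001 * v - 0.337633 * u));
        rgb[i * 3 + 2] = clamp((int)(y + 1.732446 * u));
    }
}
```

With U = V = 128 the chroma terms vanish and both pixels come out gray, which is the r = g = b = y case above.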
I was doing V4L2 today, discovering how to use the interface. It is fairly straightforward and more technically correct than V4L; that makes it more complex to use, but more versatile. I was able to capture video and convert it to RGBA in a texture. The only problem is the YUYV format, which is a real twisty little maze. The Y values are luminance and the chroma information is held in the U and V parts. The data comes in chunks of 4 bytes as "Y1 U Y2 V": luminance of pixel 1, U, luminance of pixel 2, and then V. The relationship of UV to Y is not a blog-post-length subject.
The next step is to create automated audio in chunks with a video position designator. In this way I can use eSpeak and MBROLA along with models, video, and OpenGL to create a continuous animated video stream. Blender is nice and so is Python, but direct creation of specific paths in C is easier at this point, since I have a handle on all the interfaces. It is easy enough to use the library functions to create whatever effect I need and incorporate physics for models and structures.
I can capture, convert, and record video of complex scenes (or remote telemetry) in near real time.
I am not sure that having "errno" defined when I use V4L2 is such a good idea, since that is a variable name I might use in a program. (Well, not just might: I already did, and got a conflict at compile time.) Other than that it seems very well structured and usable.
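The conflict happens because errno is a reserved name from <errno.h>, which V4L2 code pulls in for error reporting, so a user declaration like "int errno;" collides with the library's definition. A small sketch of the intended use (the helper name is mine):

```c
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* errno is set by a failing system call such as open() or ioctl().
   It is a macro reserved by <errno.h>, so declaring your own
   "int errno;" causes exactly the compile conflict described above.
   Read it immediately after the failed call, before anything else
   can overwrite it. */
static const char *last_open_error(const char *path) {
    errno = 0;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return strerror(errno);   /* human-readable reason for the failure */
    close(fd);
    return "ok";
}
```

The practical rule: treat errno as off-limits for your own identifiers in any file that touches V4L2 or other POSIX calls.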