
FFmpeg

The essential point is that one can use an open-source application called ffmpeg to stitch together a sequence of JPEG images into a video clip. ffmpeg and the libraries that support it are widely used in other open-source video applications such as the VLC media player - but here I use it as a stand-alone application. It is available on all the same platforms for which Processing can be installed. (I have used it on both 32-bit Intel/Windows 10 and iMac/OS X hosts.)

Nothing, of course, could be easier than making Processing write a series of JPEG images with file names containing sequentially increasing numerical parts, for example by calling saveFrame() each time draw() is called; ffmpeg is then easily configured to read such a file sequence (as in the sketch below). The main issue is therefore programming the generation of the desired image sequence.
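As an illustration, here is a minimal sketch of this pattern; the frame count, file names and encoder options are my own illustrative choices rather than anything prescribed here:

    // Minimal Processing sketch: write numbered JPEG frames for ffmpeg.
    int frameLimit = 250;   // 10 seconds of video at 25 frames per second

    void setup() {
      size(640, 480);
    }

    void draw() {
      background(0);
      // ... generate the image for this frame here; a moving circle
      // stands in for the real image computation ...
      ellipse(width/2 + 100 * cos(frameCount * 0.05), height/2, 50, 50);

      // The #### pattern becomes a zero-padded frame number:
      // frames/frame-0001.jpg, frames/frame-0002.jpg, ...
      saveFrame("frames/frame-####.jpg");
      if (frameCount >= frameLimit) {
        exit();
      }
    }

    // A sequence of this kind can then be stitched into a clip with, for example:
    //   ffmpeg -framerate 25 -i frames/frame-%04d.jpg -c:v libx264 -pix_fmt yuv420p clip.mp4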

The Practical Problem

It would certainly be possible to program a loop inside the Processing program that iterated through an algorithmically defined variation of the parameters controlling the image generation. I tried it - and found it very time consuming to adjust the algorithm progressively until I obtained exactly the timing and sequence of transformations that appealed above all others. This is because a great deal of computation is required to produce each frame, and if one has to wait an hour to see the effect of each adjustment the whole process becomes exceedingly tedious.

The Solution

Fortunately, in order to be able to reproduce successful static transformations (perhaps at higher resolution), I had already developed a method of saving image-transformation parameters to a comma-separated-values file, using the saveTable() function. Furthermore, in order to explore the space of transformations I had designed in some interactive features, so that mouse movements and keyboard events could be used to progressively adjust the transformation. A good deal of experimentation is involved in choosing values of the configurable parameters (the vector p) that lead to "interesting" images - in my programs the mouse position and key-presses allow interactive exploration of the image space, and I save only the images that appeal. Note that at this point I am producing only static images (for example, those illustrating the p3m wallpaper page and similar pages for other symmetry experiments). A sketch of the save operation appears below.
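The following sketch shows what such a save operation might look like; the column names ("p0", "p1", ...) and the data/params.csv path are my inventions, and since Processing's saveTable() rewrites the whole file, appending a row means loading any existing table first:

    // Append the current parameter vector p as one row of a CSV file.
    // Column names and the file path are illustrative, not the originals.
    void saveParameters(float[] p) {
      Table table;
      File f = new File(sketchPath("data/params.csv"));
      if (f.exists()) {
        table = loadTable("data/params.csv", "header");
      } else {
        table = new Table();
        for (int i = 0; i < p.length; i++) {
          table.addColumn("p" + i);
        }
      }
      TableRow row = table.addRow();
      for (int i = 0; i < p.length; i++) {
        row.setFloat("p" + i, p[i]);
      }
      saveTable(table, "data/params.csv");
    }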

Since successive image saves append new lines to the same CSV file, at the end of an exploratory session in which I have saved perhaps ten or so "key frames" that appeal, I have a table of ten lines, each listing all the control parameters. I can now use this list to generate a much longer list of specifications for perhaps several thousand frames. The process is therefore:

  1. Whenever I save an image (one judged to be "interesting") from my static exploration program, I also save a comma-separated-values (CSV) file containing the parameter vector associated with that image, using the standard library saveTable() method, with the parameter vector appearing as a table row.
  2. I have written an off-line interpolation script that takes a list of such files, each one associated with a time-value (denoting seconds from the beginning of the clip) at which that key frame should be positioned in the video clip. (I use simple linear interpolation - though there are arguments in favour of more complex algorithms, such as spline interpolation, which produce smoother variations in parameter values. An experiment for the future.) A sketch of such a script appears after this list.
  3. The output from my script is another CSV file in the same format as the one originally written for my static images, but this time containing many lines of parameter vectors representing the transformation between the key frames. Assuming that 25 frames per second are required in the final video, the interpolation script generates as many intermediate rows (i.e. parameter vectors) as are needed to sit between the time-indexed key frames.
  4. My Processing image-transformation program can then be started in a "replay" mode, in which the interpolated CSV parameter file is read (using the loadTable() method); each time draw() is called, the next row defines the parameter vector that controls that frame's image transformation, and the result is saved in the normal way using saveFrame(). A sketch of this replay loop also appears below.
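The interpolation in steps 2 and 3 could be scripted in many languages; the following Processing-style sketch shows the idea, assuming the key frames have already been gathered into a single keyframes.csv with a "time" column (in seconds) ahead of the parameter columns - the file names, column names and frame rate here are my assumptions:

    // Linear interpolation between time-indexed key-frame rows.
    // Assumes keyframes.csv has a "time" column followed by parameter
    // columns; all names and the 25 fps rate are illustrative.
    int fps = 25;

    void setup() {
      Table keys = loadTable("keyframes.csv", "header");
      Table out = new Table();
      for (String name : keys.getColumnTitles()) {
        out.addColumn(name);
      }
      for (int k = 0; k < keys.getRowCount() - 1; k++) {
        TableRow a = keys.getRow(k);
        TableRow b = keys.getRow(k + 1);
        int nFrames = round((b.getFloat("time") - a.getFloat("time")) * fps);
        for (int f = 0; f < nFrames; f++) {
          float t = f / (float) nFrames;   // 0 at key frame a, approaching 1 at b
          TableRow r = out.addRow();
          for (String name : keys.getColumnTitles()) {
            r.setFloat(name, lerp(a.getFloat(name), b.getFloat(name), t));
          }
        }
      }
      saveTable(out, "interpolated.csv");
      exit();
    }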
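The replay mode of step 4 might then look like the following fragment, where renderFrame() is a hypothetical stand-in for the actual image transformation driven by the parameter vector:

    // Replay mode: consume one interpolated row per call of draw().
    Table params;
    int current = 0;

    void setup() {
      size(1920, 1080);
      params = loadTable("interpolated.csv", "header");
    }

    void draw() {
      TableRow row = params.getRow(current);
      float[] p = new float[row.getColumnCount()];
      for (int i = 0; i < p.length; i++) {
        p[i] = row.getFloat(i);
      }
      renderFrame(p);                      // the real image computation goes here
      saveFrame("frames/frame-####.jpg");  // numbered JPEGs for ffmpeg
      current++;
      if (current >= params.getRowCount()) {
        exit();
      }
    }

    void renderFrame(float[] p) {
      background(0);
      // ... apply the image transformation controlled by p ...
    }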

It now becomes rather easy to adjust the exact timing in the final video between these key frames (perhaps to synchronise with music - experiments in progress!). Although it is not so easy to get around the need to regenerate thousands of images after each adjustment, this is an entirely automatic process with very predictable results.
