CLOUDED PLAIN
A stage: a grid of pixels where stripes dance. The stage keeps time; it has a tick value. We render the dance as an animation, and every frame of that animation is one tick of the clock. Stripes sweep across the grid, north-to-south, east-to-west, and so on, overlapping and intersecting. A stripe is a rectangle. It is created offstage, traverses the stage, and is then destroyed.
A stripe is created by a Stripe Generator. A stripe generator is created, generates stripes according to its design, and is then destroyed. The stage creates stripe generators as it sees fit: periodically, or according to whatever logic we choose.
We have a number of different Stripe Generators. Some create a single stripe. Some create a pattern of stripes.
RENDERING
A stripe has a value parameter, an integer. The value is a function of an array of integers: we take the stage's tick modulo the length of the array to get an index within the array, and the element at that index is the stripe's value at that tick.
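The tick-modulo lookup above can be sketched in a few lines. The function name and the example pattern are illustrative, not part of the design:

```python
def stripe_value(values: list[int], tick: int) -> int:
    """Index the stripe's value array by the stage tick, wrapping around."""
    return values[tick % len(values)]

# A stripe cycling through a 4-entry value pattern over 6 ticks:
pattern = [1, 2, 1, 0]
print([stripe_value(pattern, t) for t in range(6)])  # [1, 2, 1, 0, 1, 2]
```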
When we render the pixels of the plain, each pixel is covered by some number of stripes, each with a value. Sum those values to get the pixel's value, then use that value as an index into a palette to get the color of that pixel.
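A minimal sketch of that sum-then-palette rule, assuming stripes are axis-aligned rectangles; the `Stripe` tuple and the wrap into the palette (rather than clamping) are assumptions, since the notes don't say what happens when the sum exceeds the palette length:

```python
from typing import NamedTuple

class Stripe(NamedTuple):
    x: int
    y: int
    w: int
    h: int
    value: int

def covers(s: Stripe, px: int, py: int) -> bool:
    """True if pixel (px, py) lies inside stripe s."""
    return s.x <= px < s.x + s.w and s.y <= py < s.y + s.h

def pixel_color(stripes, px, py, palette):
    """Sum the values of every stripe covering the pixel, index the palette."""
    v = sum(s.value for s in stripes if covers(s, px, py))
    return palette[v % len(palette)]  # wrap; clamping is the other option
```

Where two value-1 stripes overlap, the pixel sums to 2 and picks the third palette entry.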
This pattern of intersecting stripes produces a pattern of rectangles. Each rectangle has a location, height, width, area, and color. These are translated into various sound-generation parameters (maybe with a bit of noise for fatter chorusing), giving us the sound generated by that rectangle. Sum the sound generated by all rectangles to get the sound of the plain.
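One possible rectangle-to-synth mapping, as a sketch. The specific choices here (area picks a semitone step, distance from center sets volume, color picks a waveform) are illustrative guesses; the notes only say "various sound-generation parameters":

```python
import math
import random

def rect_to_voice(x, y, w, h, grid_w, grid_h, color_index):
    """Map one rectangle's geometry and color to synth parameters."""
    area = w * h
    freq = 110.0 * 2 ** ((area % 24) / 12)        # area picks a semitone step
    cx, cy = x + w / 2, y + h / 2
    dist = math.hypot(cx - grid_w / 2, cy - grid_h / 2)
    volume = max(0.0, 1.0 - dist / (grid_w / 2))  # louder near the center
    detune = 1.0 + random.uniform(-0.002, 0.002)  # a little noise for chorusing
    return {"freq": freq * detune, "volume": volume, "waveform": color_index % 4}
```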
#######################################
make frames and sound
convert frames to video via
ffmpeg -r 60 -f image2 -s 1920x1080 -i %05d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4
put the video and the sound together in openshot
export to 720p medium quality MP4 (H.264)
60 fps
high quality?
IN A NUTSHELL: It's a stage (the plain) where cell-pattern generators (clouds) dance. The stage handles cloud mixing by summing cloud presence at each cell, and then we render that summed value to a color or strobe.
We have a graphic renderer. It converts the plain into an image by using cell values as indices into a color array. Simple.
We have a sound renderer. It converts the plain to a sound, interpreting shapes and color as pitch and volume and waveform and such.
Our graphic animation rate is 60fps at the moment. We might change it later (120fps?) but probably not.
Our sound sample rate is some multiple of 60 in the vicinity of 44.1 kHz, something with lots of factors for each frame's sound-sample duration. Maybe 43200 (60 × 720, and 720 = 1·2·3·4·5·6).
So when we render graphics we render a frame, increment time, render another frame and so on.
And when we render sound, we generate 1/60th of a second of sound into a sound array, one buffer per frame.
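At the assumed 43200 Hz sample rate that's 720 samples per frame. A sketch of per-frame sound generation, where each voice is a plain sine (a stand-in for whatever the sound renderer actually sums per rectangle):

```python
import math

SAMPLE_RATE = 43200                # 60 * 720; divisible by the 60 fps frame rate
FRAME_SAMPLES = SAMPLE_RATE // 60  # 720 samples per 1/60 s frame

def render_frame_sound(voices, frame_index):
    """Sum sine voices into one 1/60 s buffer; voices = [{"freq", "volume"}]."""
    t0 = frame_index * FRAME_SAMPLES
    buf = [0.0] * FRAME_SAMPLES
    for v in voices:
        for i in range(FRAME_SAMPLES):
            t = (t0 + i) / SAMPLE_RATE
            buf[i] += v["volume"] * math.sin(2 * math.pi * v["freq"] * t)
    return buf
```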
So the subsystems are:
- plain
- cloud
- graphic renderer
- sound renderer
- cloud generator

Every tick of the clock (every frame, that is: every 1/60 s) we query the cloud generator, and maybe it generates some clouds. It's a conditional thing, based on time, the present cloud population, and so on. The clouds enter the stage, do their dance, and exit.
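The per-tick loop tying those subsystems together might look like this. The class and method names (`generate`, `manifest`, `render`, ...) are assumptions, not a fixed API; the notes only say the generator is queried every tick and clouds exit when finished:

```python
class Stage:
    def __init__(self, plain, cloud_generator, graphic_renderer, sound_renderer):
        self.plain = plain
        self.cloud_generator = cloud_generator
        self.graphic_renderer = graphic_renderer
        self.sound_renderer = sound_renderer
        self.clouds = []

    def tick(self, t):
        # Query the generator; it may emit zero or more clouds this tick.
        self.clouds.extend(self.cloud_generator.generate(t, self.clouds))
        for cloud in self.clouds:
            cloud.manifest(self.plain, t)  # add values to covered cells
        self.clouds = [c for c in self.clouds if not c.is_finished(t)]
        frame = self.graphic_renderer.render(self.plain)  # one image frame
        sound = self.sound_renderer.render(self.plain)    # 1/60 s of audio
        return frame, sound
```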
Framework for doing a certain kind of av composition.
Clouds drifting in layers over plain
Clouds are cell-manifesting agents Plain is field of cells.
Like Blanketflower but better: suaver design, faster, smoother, and it has sound.
Cloud
  void setPlain(plain)
  manifest — derive t from the plain; add values for all cells in the plain where the cloud is present. Usually that value is 1, sometimes 2 if we want to illustrate area overlap or a second-level effect or something.
  boolean isFinished — used to tell the system to discard this cloud because it's done
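A minimal Python sketch of that Cloud interface. The default manifest value of 1 follows the notes; the grid-as-nested-lists representation and the `cells_at` hook are assumptions:

```python
class Cloud:
    def set_plain(self, plain):
        self.plain = plain  # a 2D grid of cell values (list of rows)

    def manifest(self, t):
        """Add this cloud's value to every cell it covers at time t."""
        for (x, y) in self.cells_at(t):           # subclass-defined pattern
            self.plain[y][x] += self.value_at(t)  # usually 1, sometimes 2

    def value_at(self, t):
        return 1

    def cells_at(self, t):
        raise NotImplementedError  # each cloud type defines its dance

    def is_finished(self, t):
        raise NotImplementedError  # True once the cloud should be discarded
```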
clouds are time-parametric cell pattern generators
plain is an array of cells
clouds are graphically mixed by summing manifestations in cells and then using that value to get a color from a palette-array
Sound is derived by examining the plain: get all the colored shapes and translate them into tones. E.g.: shape color = waveform, area = frequency, closeness to field center = volume, gangliness = vibrato.
It's a tiled-image and sculpted-tone audio-video mixer-generator
The plain is a field of tiles
The clouds are tile-value expressors They are instantiated, do their dance, then are discarded
The cloud expresses a pattern of tile values, summed per tile; that sum determines the tile's color. The audio is derived from the various shapes formed by colored masses of tiles (or however we want to do it).
We have two rates to consider: frame rate and audio sample rate. Both will be derived from a fundamental tick. Tick frequency will be some really high audio sample rate, I guess.
try different resolutions for the plain. Fine and coarse.
can clouds interact? spawn more clouds?
we use 60 frames per second
Graphical movement always occurs one tick of movement at a time: either every frame, or every 3rd frame, or whatever, depending on speed. I.e.: object.x = foo, foo+1, foo+2, etc. One square tile at a time.
If we use 60 ticks per second (i.e. 60 fps, so the tick rate and frame rate are the same, and the sound sample rate can be some multiple of that), then we get a nice set of usable movement speeds: the divisors of 60. 1 (one action per tick), 1/2 (one action every other tick), 1/3 (one action every 3 ticks), 1/4, 1/5, 1/6, 1/12, etc.
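The speed scheme above can be sketched as follows: an object moves one whole tile every N ticks, where N divides 60, so every speed completes a whole number of moves per second. The function names are illustrative:

```python
def divisors(n):
    """All positive divisors of n; for n=60 these are the usable tick strides."""
    return [d for d in range(1, n + 1) if n % d == 0]

def position(start, ticks_per_move, tick):
    """Tile position after `tick` ticks, moving 1 tile every `ticks_per_move`."""
    return start + tick // ticks_per_move

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```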
One type of cloud generator will generate a LOGO
So the logo gets injected into the mix.
we sell it like this
client specifies palette and logo
I present client with a dozen videos. (fuck smooth looping. Put a bracket in there or something)
profit
make frames and sound
convert frames to video
############## MKV IS BETTER #################
ffmpeg -r 60 -f image2 -s 1280x720 -i %05d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p video.mkv
##############################################
connect audio to video
##############################################
ffmpeg -i video.mkv -i audio.wav -c copy audiovideo.mkv
##############################################