This final blog in the Movie Vision App series, following on from The Movie Vision App: Part 1 and The Movie Vision App: Part 2, discusses the last two movie effect filters: “Follow the White Rabbit” and “Why So Serious?”.
“Follow the White Rabbit” is the most intriguing and complex filter in the Movie Vision demonstration. The camera preview image is replaced by a grid of small characters (primarily Japanese Kana). The characters are coloured in varying shades of green, reminiscent of old computer displays, and the brightness is manipulated to make some characters appear to ‘fall’ down the image. The overall impression is that the image is entirely composed of green, computer-code-like characters.
…
//Run the WhiteRabbitScript with the RGB camera input allocation.
mWhiteRabbitScript.forEach_root(mWhiteRabbitInAlloc, mWhiteRabbitOutAlloc);

//Make the heads move, dependent on the speed.
for (int hp = 0; hp < mScreenWidth / mCharacterSize; hp++) {
    mHeadPos[hp] += mSpeeds[mStrChar[hp]];
    //If the character string has reached the bottom of the screen, wrap it back around.
    if (mHeadPos[hp] > mScreenHeight + 150) {
        mHeadPos[hp] = 0;
        mStrChar[hp] = mGen.nextInt(8) + 1;
        mStrLen[hp] = mGen.nextInt(100) + 50;
        mUpdate = true;
    }
}

//If a character string has reached the bottom, update the allocations with new random values.
if (mUpdate) {
    mStringLengths.copyFrom(mStrLen);
    mStringChars.copyFrom(mStrChar);
    mUpdate = false;
}
…
“Follow the White Rabbit” excerpt from processing of each camera frame
The Java component of this image filter does the standard RenderScript set-up, but also populates several arrays used to map the image to characters. The number of columns and rows of characters is calculated, and a random character index is chosen for each column. A set of head positions and string lengths is also randomly generated, one per column. These correspond to areas that will be drawn brighter than the rest of the image, giving the impression of falling strings of characters. When each camera preview frame is received, the standard YUV to RGB conversion is performed. Then the image effect script’s references to the character, position and length arrays are updated, and the script kernel is executed. Afterwards, the head positions are adjusted so that the brighter vertical strings appear to fall down the image (and wrap back to the top).
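To make that set-up concrete, here is a minimal sketch of how the per-column arrays could be wrapped in allocations and bound to the script. The ScriptC_whiterabbit class name and the stringLengths and headPositions globals are assumptions based on the excerpts in this post; only stringChars appears verbatim in the kernel code below.

//Number of character columns across the preview image (assumed field names).
int columns = mScreenWidth / mCharacterSize;

//Per-column state: which character to draw, where the string's head is, and how long it is.
mStrChar = new int[columns];
mHeadPos = new int[columns];
mStrLen = new int[columns];
for (int c = 0; c < columns; c++) {
    mStrChar[c] = mGen.nextInt(8) + 1;          //Random character index (1-8).
    mHeadPos[c] = mGen.nextInt(mScreenHeight);  //Random starting head position.
    mStrLen[c] = mGen.nextInt(100) + 50;        //Random falling-string length.
}

//Wrap the arrays in allocations and hand them to the script.
//set_stringLengths/set_headPositions are assumed generated setters.
mStringChars = Allocation.createSized(mRS, Element.I32(mRS), columns);
mStringLengths = Allocation.createSized(mRS, Element.I32(mRS), columns);
mHeadPositions = Allocation.createSized(mRS, Element.I32(mRS), columns);
mStringChars.copyFrom(mStrChar);
mStringLengths.copyFrom(mStrLen);
mHeadPositions.copyFrom(mHeadPos);
mWhiteRabbitScript.set_stringChars(mStringChars);
mWhiteRabbitScript.set_stringLengths(mStringLengths);
mWhiteRabbitScript.set_headPositions(mHeadPositions);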
…
static const int character1[mWhiteRabbitArraySize] = {0, 0, 1, 0, 0, 0,
                                                      0, 0, 1, 0, 0, 0,
                                                      1, 1, 1, 1, 1, 1,
                                                      0, 0, 1, 0, 0, 1,
                                                      0, 1, 0, 0, 0, 1,
                                                      1, 0, 0, 0, 1, 1};

static const int character2[mWhiteRabbitArraySize] = {0, 0, 1, 1, 1, 0,
                                                      1, 1, 1, 1, 1, 1,
                                                      0, 0, 0, 1, 0, 0,
                                                      0, 0, 0, 1, 0, 0,
                                                      0, 0, 1, 0, 0, 0,
                                                      0, 1, 0, 0, 0, 0};
…
“Follow the White Rabbit” RenderScript character setup
This is by far the most complicated RenderScript kernel in the Movie Vision app. The script file starts with eight statically defined characters from the Japanese Kana alphabet, each defined as a 6x6 array. The first line of the script execution is a conditional statement: the script only executes on every eighth pixel in both the x and y directions, so it runs ‘per character’ rather than ‘per pixel’. As the characters are 6x6, this leaves a one-pixel border around each one. The output colour for the current position is set to a default green value, based on the input colour. The character index, head position and string length values are retrieved from the arrays managed by the Java class. Next, we determine whether the character corresponding to the current pixel is in our bright ‘falling’ string, and adjust the green value appropriately: brightest at the head, gradually fading behind it, and capped at a lower maximum elsewhere. If the current character position isn’t at the front of the falling string, we also pseudo-randomly change the character to add a dynamic effect to the image. Some basic skin tone detection then further brightens the output where skin is detected. Finally, the output values for all pixels in the current character position are set.
…
//Sets the initial green colour, which is later modified depending on the in pixel.
refCol.r = 0;
refCol.g = in->g;
refCol.b = in->g & 30;
…
//If the Y position of this pixel is the same as the head position in this column,
//set it to solid green.
if (y == currHeadPos)
    refCol.g = 0xff;
//If the character is within the bounds of the falling character string for that
//column, make it darker the further away from the head it is.
else if ((y < currHeadPos && y >= (currHeadPos - currStringLength)) ||
         (y < currHeadPos && (currHeadPos - currStringLength) < 0))
    refCol.g = 230 - ((currHeadPos - y));
else if (refCol.g > 150) //Cap the green at 150.
    refCol.g -= 100;
else //For every other character, make it brighter.
    refCol.g += refCol.g | 200;

//If the current character isn't the head, randomly change it.
if (y != currHeadPos)
    theChar += *(int*)rsGetElementAt(stringChars, (y / mWhiteRabbitCharSize));

//Basic skin detection to highlight people.
if (in->r > in->g && in->r > in->b) {
    if (in->r > 100 && in->g > 40 && in->b > 20 && (in->r - in->g) > 15)
        refCol.g += refCol.g & 255;
}
…
//Loop through the binary array of the current character.
for (int py = 0; py < mWhiteRabbitCharSize; py++) {
    for (int px = 0; px < mWhiteRabbitCharSize; px++) {
        out[(py * mWidth) + px].r = 0;
        out[(py * mWidth) + px].g = 0;
        out[(py * mWidth) + px].b = 0;
        if (theChar == 1) {
            if (character1[(py * mWhiteRabbitCharSize) + px] == 1)
                out[(py * mWidth) + px] = refCol;
        } else if (theChar == 2) {
            if (character2[(py * mWhiteRabbitCharSize) + px] == 1)
                out[(py * mWidth) + px] = refCol;
…
Excerpts of the “Follow the White Rabbit” RenderScript kernel root function
“Why So Serious?” mimics a sonar vision effect. Part of this is a simple colour mapping to a blue-toned image. In addition, areas of the image are brightened in proportion to the amplitude of sound samples from the microphone.
…
mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
//The output format must be set before the encoder and encoding parameters.
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mRecorder.setAudioChannels(1);
mRecorder.setAudioEncodingBitRate(8);
//The recording itself is never used, only the sampled amplitude, so discard it.
mRecorder.setOutputFile("/dev/null");

try {
    mRecorder.prepare();
    mRecorder.start();
    //The first call establishes the baseline for the maximum amplitude measurement.
    mRecorder.getMaxAmplitude();
    mRecording = true;
} catch (IOException ioe) {
    mRecording = false;
}
…
“Why So Serious?” setting up the microphone
The Java side of this filter does the standard configuration for a RenderScript kernel. It also sets up the Android MediaRecorder to record sound continuously, dumping the output to /dev/null. A set of look-up tables, similar to those in the ‘Get to the chopper’ filter, is used to do the colour mapping; references to these are passed to the script. For each camera preview frame, the maximum amplitude sampled since the last frame and a random x and y position are passed to the RenderScript kernel. The image is converted to RGB and then the image effect kernel is executed.
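As a rough sketch, that per-frame hand-off might look like the following. The script instance, allocation names and the setters (set_amplitude, set_pulseX, set_pulseY) are assumptions for illustration; only the kernel side appears in the excerpts from the app.

//Called for each camera preview frame.
//Maximum absolute amplitude sampled by the microphone since the last call.
int amplitude = mRecording ? mRecorder.getMaxAmplitude() : 0;

//Pass the loudest sample since the last frame, plus a random pulse centre.
mWhySoSeriousScript.set_amplitude(amplitude);
mWhySoSeriousScript.set_pulseX(mGen.nextInt(mScreenWidth));
mWhySoSeriousScript.set_pulseY(mGen.nextInt(mScreenHeight));

//Standard YUV to RGB conversion, then the image effect kernel.
mYuvToRgbScript.forEach_root(mYuvInAlloc, mRgbAlloc);
mWhySoSeriousScript.forEach_root(mRgbAlloc, mOutAlloc);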
…
//If the current pixel is within the radius of the circle, apply the 'pulse' effect colour.
if (((x1 * x1) + (y1 * y1)) < (scaledRadius * scaledRadius)) {
    dist = sqrt((x1 * x1) + (y1 * y1));
    if (dist < scaledRadius) {
        effectFactor = (dist / scaledRadius) * 2;
        lightLevel *= effectFactor;
        blue -= lightLevel;
    }
}

//Look up the RGB values based on the external lookup tables.
uchar R = *(uchar*)rsGetElementAt(redLUT, blue);
uchar G = *(uchar*)rsGetElementAt(greenLUT, blue);
uchar B = *(uchar*)rsGetElementAt(blueLUT, blue);

//Clamp the values between 0-255.
R > 255 ? R = 255 : R < 0 ? R = 0 : R;
G > 255 ? G = 255 : G < 0 ? G = 0 : G;
B > 255 ? B = 255 : B < 0 ? B = 32 : B;

//Set the final output RGB values.
out->r = R;
out->g = G;
out->b = B;
out->a = 0xff;
}
…
“Why So Serious?” RenderScript kernel root function
The RenderScript kernel calculates a brightness, radius and offset for a ‘pulse’ effect based on the amplitude and position passed to it. If the current pixel is within the pulse circle, it is brightened considerably. The output colour channels for the pixel are then set based on the lookup tables defined in the Java file.
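For completeness, the blue-toned tables referenced as redLUT, greenLUT and blueLUT in the kernel could be built on the Java side along these lines. This is a sketch only: the ramp values and setter names here are illustrative assumptions, not the actual tables used by Movie Vision.

//Build 256-entry lookup tables mapping a brightness value to a blue-toned colour.
byte[] red = new byte[256];
byte[] green = new byte[256];
byte[] blue = new byte[256];
for (int i = 0; i < 256; i++) {
    red[i] = (byte) (i / 4);                //Suppress red heavily (illustrative ramp).
    green[i] = (byte) (i / 2);              //Suppress green moderately.
    blue[i] = (byte) Math.min(255, 32 + i); //Keep a blue floor so shadows stay blue.
}

//Wrap the tables in allocations and pass them to the kernel's LUT globals.
mRedLUT = Allocation.createSized(mRS, Element.U8(mRS), 256);
mGreenLUT = Allocation.createSized(mRS, Element.U8(mRS), 256);
mBlueLUT = Allocation.createSized(mRS, Element.U8(mRS), 256);
mRedLUT.copyFrom(red);
mGreenLUT.copyFrom(green);
mBlueLUT.copyFrom(blue);
mWhySoSeriousScript.set_redLUT(mRedLUT);
mWhySoSeriousScript.set_greenLUT(mGreenLUT);
mWhySoSeriousScript.set_blueLUT(mBlueLUT);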
Can you guess which movies inspired the “Follow the White Rabbit” and “Why So Serious?” filters?
At the beginning of this blog series we stated that the Movie Vision app was conceived as a demonstration to highlight heterogeneous computing capabilities in mobile devices. Specifically, we used RenderScript on Android to show the GPU Compute capabilities of ARM® Mali™ GPU technology. As a proof of concept and a way to explore one of the emerging GPU computing programming frameworks, Movie Vision has been very successful: RenderScript has proven to be an easy-to-use API, and it is highly portable, leveraging both ARM CPU and GPU technology. The Movie Vision App explored a fun and entertaining use case, but it is only one example of the potential of heterogeneous approaches like GPU Compute.
We hope you have enjoyed this blog series, and that this inspires you to create your own applications that explore the capabilities of ARM technology.
This work by ARM is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. However, in respect of the code snippets included in the work, ARM further grants to you a non-exclusive, non-transferable, limited license under ARM’s copyrights to Share and Adapt the code snippets for any lawful purpose (including use in projects with a commercial purpose), subject in each case also to the general terms of use on this site. No patent or trademark rights are granted in respect of the work (including the code snippets).