Check it out! Here’s the sketch.
Introduction
In this project, the goal was to collaborate in pairs to manipulate an image or video at the pixel level, focusing purely on visual transformations without sound. The image was designed to evolve over the span of a minute, encouraging us to consider what is revealed or lost during the transformation. We were tasked with using color properties strategically to guide viewers' attention.
From the start, Devan and I knew we wanted to focus on what’s “not seen” and on negative space, and we drew on that as our inspiration. I had experimented with this idea in the two-tone mirror assignment we did for class, and I wanted to elaborate on it further in this project.
Brainstorming
During our initial brainstorming session, my partner and I explored different approaches to create a unique visual experience. We initially considered using the `createCapture()` function in p5.js to capture a live feed and manipulate it in real time. However, we quickly realized that capturing and replicating a minute-long experience twice would be challenging in a classroom setting, where consistency was key. This led us to pivot away from real-time capture.
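For context, the kind of live treatment we had been imagining looked roughly like this, in the spirit of the two-tone mirror exercise. This is only an illustrative sketch, not project code; the block size and brightness threshold are arbitrary choices.

let cam;

function setup() {
  createCanvas(320, 240);
  cam = createCapture(VIDEO); // live webcam feed
  cam.size(width, height);
  cam.hide(); // hide the raw DOM element; we redraw it ourselves
  noStroke();
}

function draw() {
  background(0);
  cam.loadPixels();
  if (cam.pixels.length === 0) return; // camera not ready yet
  // Walk the feed in blocks and redraw each block as pure black or white
  let step = 8;
  for (let y = 0; y < cam.height; y += step) {
    for (let x = 0; x < cam.width; x += step) {
      let i = (x + y * cam.width) * 4;
      let bright = (cam.pixels[i] + cam.pixels[i + 1] + cam.pixels[i + 2]) / 3;
      fill(bright > 127 ? 255 : 0);
      rect(x, y, step, step);
    }
  }
}

With a live feed like this, no two runs of the minute are ever quite the same, which is exactly the consistency problem that pushed us toward a fixed photograph instead.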
Despite this change, we both wanted to explore negative space—highlighting the idea of showing what’s absent or hidden within the original image. We agreed that manipulating color values would be a powerful way to achieve this effect.
With my background in photography and my partner’s experience in poetry, we thought about incorporating words into the image manipulation process, using text as a means to "reveal by concealing." The idea was to selectively remove parts of the image or text, uncovering different elements of the scene. Ultimately, we decided not to use words, as they felt secondary to our visual focus. However, we agreed to use one of my photos as our base image, one that resonated with both of us.
Our initial approach to the project involved manipulating each pixel's RGB values while adjusting its alpha value. We began by creating three image objects: `rsparkle`, `gsparkle`, and `bsparkle`, with the original image object named `sparkle`. The idea was to load pixel values from `sparkle` into each of these objects, then manipulate the alpha values to control opacity and create an additive effect that would reveal the image over time.
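Roughly, the setup looked something like this. It’s a simplified sketch rather than our actual code: the per-channel split and the specific alpha value are just illustrative.

let sparkle, rsparkle, gsparkle, bsparkle;

function preload() {
  sparkle = loadImage("sparkle.png");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  // three copies of the source image, one per color channel
  rsparkle = sparkle.get();
  gsparkle = sparkle.get();
  bsparkle = sparkle.get();
  isolateChannel(rsparkle, 0);
  isolateChannel(gsparkle, 1);
  isolateChannel(bsparkle, 2);
}

// keep only one RGB channel and drop the alpha below full opacity
function isolateChannel(img, channel) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    for (let c = 0; c < 3; c++) {
      if (c !== channel) img.pixels[i + c] = 0;
    }
    img.pixels[i + 3] = 85; // partial alpha on every copy
  }
  img.updatePixels();
}

function draw() {
  background(0);
  // layer the three translucent copies on top of each other
  image(rsparkle, 0, 0);
  image(gsparkle, 0, 0);
  image(bsparkle, 0, 0);
}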
However, this method was ultimately ineffective. Since our alpha values were set below 255, we were unable to achieve the desired opaque result. Our goal was to use varying opacities to progressively reveal the final image, with continuously changing RGB values. When this approach didn’t yield the intended effect, we decided to explore a new method.
Our pivot led us to experiment with sorting pixels based on brightness. By organizing pixels by their brightness values, we were able to create a dynamic, evolving image where brightness levels corresponded to different visual transformations. Different brightness values controlled both the movement of pixels and the variation in RGB values, resulting in the effect we were aiming for.
In refining this approach, we realized we didn’t need three separate image objects. Instead, we could achieve our result with a single image object and arrays to store the necessary data for RGB manipulation and brightness-based sorting. This streamlined our code and made it more efficient.
Final Product
Our final result is a dynamic image that’s continuously sorted by brightness values. Each pixel’s RGB values oscillate over time, moving along a sine curve at varying speeds based on brightness. This produces a visually engaging effect, where the image gradually shifts and evolves while preserving aspects of the original. Over time, the RGB values align closer to the original color values, creating a sense of return to the source image, but only for a moment before changing again: whenever a pixel’s sine offset crosses zero its channels pass exactly through their original values, and because each pixel’s phase advances at a different rate, that return is staggered across the image.
Here’s the final code:
let sparkle;
let originalColors = []; // Store the original colors of each pixel
let colorOffsets = []; // Store an offset for each pixel to animate color changes

// Variable for changing the HSB values
// off for offset
let off = 0;
let offspeed = 0.1;

function preload() {
  sparkle = loadImage("sparkle.png");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(0);
  sparkle.loadPixels();
  // Initialize arrays to store original colors and offsets for each pixel
  for (let i = 0; i < sparkle.pixels.length; i++) {
    // Store the original color for each pixel
    originalColors[i] = sparkle.pixels[i];
    // Initialize a color offset for animation: one offset per pixel (1 per 4 values in the array)
    if (i % 4 === 0) {
      colorOffsets[i / 4] = random(0, 1); // use random for variation
    }
  }
}

function draw() {
  colorMode(RGB);
  // Center the image on the canvas
  translate((width - sparkle.width) / 2, (height - sparkle.height) / 2);
  // Load pixels to manipulate them directly
  sparkle.loadPixels();
  // Sweep off back and forth (meant to drive the HSB values; not used in the RGB manipulation below)
  off += offspeed;
  if (off < 0 || off > 350) {
    offspeed *= -1;
  }
  // Loop through every pixel in the image by x and y coordinates
  for (let x = 0; x < sparkle.width; x++) {
    for (let y = 0; y < sparkle.height; y++) {
      let i = (x + y * sparkle.width) * 4; // set index for each pixel
      // store the original color values
      let rOrig = originalColors[i];
      let gOrig = originalColors[i + 1];
      let bOrig = originalColors[i + 2];
      // store brightness value of the original pixel --> used to sort pixels into behavior groups
      let brightnessValue = brightness(color(rOrig, gOrig, bOrig));
      // conditionals for rates of change --> we use this value to change colorOffsets[]
      let sinFood;
      if (brightnessValue > 75) {
        sinFood = 0.02 / 2;
      } else if (brightnessValue > 50) {
        sinFood = 0.015 / 2;
      } else {
        sinFood = 0.01 / 2;
      }
      // change animation rates of different pixels based on their brightness
      colorOffsets[i / 4] += sinFood;
      // Calculate oscillating factor between -1 and 1 for each color channel --> we use sin() to make the movement stable
      let oscillate = sin(colorOffsets[i / 4]);
      // variable for oscillation modifier
      let oscMod = 100;
      /*
        NOTE: sinFood is how fast each pixel's phase advances, oscillate = sin(phase) is the
        current offset between -1 and 1, and oscMod is the intensity of the color manipulation.
      */
      // Modify pixel colors based on brightness and oscillation
      if (brightnessValue > 25 && brightnessValue < 50) {
        // Shift red up and green/blue down
        sparkle.pixels[i] = rOrig + oscillate * oscMod; // Redder
        sparkle.pixels[i + 1] = gOrig - oscillate * oscMod; // Less green
        sparkle.pixels[i + 2] = bOrig - oscillate * oscMod; // Less blue
      } else if (brightnessValue > 50 && brightnessValue < 75) {
        // Increase blue and reduce red/green for cooler shades
        sparkle.pixels[i] = rOrig - oscillate * oscMod;
        sparkle.pixels[i + 1] = gOrig - oscillate * oscMod;
        sparkle.pixels[i + 2] = bOrig + oscillate * oscMod; // Bluer
      } else {
        // Adjust colors proportionally for other brightness levels
        sparkle.pixels[i] = rOrig + oscillate * oscMod;
        sparkle.pixels[i + 1] = gOrig + oscillate * oscMod;
        sparkle.pixels[i + 2] = bOrig + oscillate * oscMod;
      }
    }
  }
  // Update the pixels array to reflect changes
  sparkle.updatePixels();
  // Display the image on the canvas
  image(sparkle, 0, 0);
}
Looking ahead, we’re excited to explore further possibilities. We’re particularly interested in integrating a live webcam feed, allowing for real-time manipulation based on external input. One feature we aimed to implement but couldn’t finalize was adjusting the overall opacity of the image in response to the brightness levels detected by the webcam. This remains a goal for future iterations, as it would add a compelling interactive element to the piece.
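As a starting point for that idea, here’s a minimal sketch of how it might work, assuming a webcam feed and p5’s tint() for overall opacity. None of this is from the finished piece, and the brightness-to-opacity mapping is just one possibility.

let sparkle;
let cam;

function preload() {
  sparkle = loadImage("sparkle.png");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  cam = createCapture(VIDEO);
  cam.size(64, 48); // a tiny feed is enough for an average brightness reading
  cam.hide();
}

function draw() {
  background(0);
  cam.loadPixels();
  if (cam.pixels.length === 0) return; // camera not ready yet
  // Average the feed's channels to estimate how bright the room is
  let total = 0;
  for (let i = 0; i < cam.pixels.length; i += 4) {
    total += (cam.pixels[i] + cam.pixels[i + 1] + cam.pixels[i + 2]) / 3;
  }
  let avg = total / (cam.pixels.length / 4);
  // Map room brightness to the image's overall opacity: a dark room fades the image out
  let alpha = map(avg, 30, 220, 0, 255, true);
  tint(255, alpha);
  image(sparkle, (width - sparkle.width) / 2, (height - sparkle.height) / 2);
}

The same average could just as easily drive the oscillation speed or oscMod instead of opacity, which is another direction we’d like to try.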