Thursday, August 28, 2008

A15 – Color Image Processing

A colored digital image is an array of pixels, each having red, green and blue light overlaid in various proportions. Per pixel, the color captured by a digital color camera is an integral of the product of the spectral power distribution of the incident light source S(λ), the surface reflectance r(λ) and the spectral sensitivity of the camera h(λ).
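Written out (my notation, assuming a linear camera response), the value recorded in channel i of a pixel is roughly

$$ C_i = K \int S(\lambda)\, r(\lambda)\, h_i(\lambda)\, d\lambda, \qquad i \in \{R, G, B\}, $$

where K is an overall gain constant. White balancing then amounts to finding per-channel constants that make a known white object come out with equal R, G and B values.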

White Balancing
This setting allows the user to select the white balancing constants appropriate for the capturing conditions.

White Balance Settings
Daylight - not all cameras have this setting because it corresponds to a fairly ‘normal’ white balance.
Incandescent/Tungsten - usually symbolized by a little bulb, this mode is for shooting indoors, especially under tungsten (incandescent) bulb lighting. It generally cools down the colors in photos.
Fluorescent - this compensates for the ‘cool’ light of fluorescent lamps and will warm up your shots.
Cloudy - this setting generally warms things up a touch more than ‘daylight’ mode.

There are two popular algorithms for achieving automatic white balance: the Reference White Algorithm and the Gray World Algorithm.

In the Reference White Algorithm, you capture an image using an unbalanced camera and use the RGB values of a known white object as the per-channel dividers.

In the Gray World Algorithm, it is assumed that the average color of the world is gray.
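In equation form (my own shorthand, with C standing for any one of the R, G, B channels of the unbalanced image), the two algorithms differ only in the divider:

$$ C_{RW}(x,y) = \frac{C(x,y)}{C_{white}}, \qquad C_{GW}(x,y) = \frac{C(x,y)}{\langle C \rangle}, $$

where C_white is the channel value of the known white object and ⟨C⟩ is the channel's average over the whole image. This is exactly what the code at the end of this post does.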

Using a cloudy white balance, the original image is yellowish. With reference white, the image became whiter. With gray world, the image is also whiter than the original but darker than the reference white result.

Using an incandescent white balance, the image is bluish. With reference white, the image became whiter and brighter than the original. With gray world, the image is darker but still whiter than the original.

Using the daylight setting, the image became whiter with reference white than with gray world. It also became brighter with reference white.

Using a fluorescent white balance, the image became white with both reference white and gray world, but I think it is brighter with gray world.
Using an incandescent white balance, the image is bluish. After applying reference white, the leaves became green and the paper became white. After applying gray world, the leaves became green and the paper is close to gray.

Based on these results, I think the reference white algorithm is better than the gray world algorithm.

-- code --
//white balance - reference white algorithm

stacksize(20000000);
image = imread('rix.jpg');      // unbalanced image
im = imread('rix-white.jpg');   // known white reference (e.g. a crop of a white object)

// mean RGB of the white reference serves as the per-channel divider
RW = mean(im(:,:,1));
GW = mean(im(:,:,2));
BW = mean(im(:,:,3));

new = zeros(image);
new(:,:,1) = image(:,:,1)./RW;
new(:,:,2) = image(:,:,2)./GW;
new(:,:,3) = image(:,:,3)./BW;

// rescale so that the maximum value is 1 before saving
maxall = max(max(max(new)));
newim = new./maxall;

imwrite(newim,'rix-new-white.jpg');

//gray world algorithm

stacksize(20000000);
image = imread('rix1.jpg');

new = zeros(image);

// divide each channel by its own mean (the average color of the scene is assumed gray)
new(:,:,1) = image(:,:,1)./mean(image(:,:,1));
new(:,:,2) = image(:,:,2)./mean(image(:,:,2));
new(:,:,3) = image(:,:,3)./mean(image(:,:,3));

// rescale so that the maximum value is 1 before saving
maxall = max(max(max(new)));
newim = new./maxall;

imwrite(newim,'rix1-new-gray.jpg');

Acknowledgements
Activity 15 manual
http://digital-photography-school.com/blog/introduction-to-white-balance/

Benj - for the help with the codes

Grade:
10/10
- because I think I implemented the algorithms well. The colors of the images were improved. :)

Tuesday, August 26, 2008

A14 – Stereometry

Objective: To reconstruct a 3D object.

Stereo imaging is the technique we will use in this activity; it is inspired by how our two eyes allow us to perceive depth.
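For reference, the standard relation stereometry builds on (a sketch of the usual result, assuming two identical cameras with focal length f separated by a transverse distance b, rather than anything derived in this entry) is

$$ z \approx \frac{b f}{x_1 - x_2} $$

where x1 and x2 are the image coordinates of the same object point in the two views, so the depth z is recovered from the disparity x1 - x2.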

Thursday, August 7, 2008

A13 – Photometric Stereo

We can estimate the shape of a surface by capturing multiple images of it with the light source at different locations. Information about the surface is encoded in the shading of these images.

Let there be N sources in 3-D space. We can define a matrix V in which each row is a source and the columns are its x, y, z components:

$$ V = \begin{pmatrix} V_{1x} & V_{1y} & V_{1z} \\ V_{2x} & V_{2y} & V_{2z} \\ \vdots & \vdots & \vdots \\ V_{Nx} & V_{Ny} & V_{Nz} \end{pmatrix} $$
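Assuming Lambertian reflection (the standard assumption behind photometric stereo), the intensity captured under each source is proportional to the dot product of the source direction with the surface normal, so if each image is flattened into one row of a matrix I,

$$ I = V g, \qquad g = \rho\, \hat{n}, $$

where ρ is the albedo and n̂ is the unit surface normal at each pixel. Equations 10 and 11 below recover g and n̂ from this relation.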

Procedure:
1. Load the matlab file photos.mat which contains 4 images I1, I2, I3,
and I4. The images are synthetic spherical surfaces illuminated by a far
away point source located respectively at
V1 = {0.085832, 0.17365, 0.98106}
V2 = {0.085832, -0.17365, 0.98106}
V3 = {0.17365, 0, 0.98481}
V4 = {0.16318, -0.34202, 0.92542}
Use loadmatfile in Scilab.

Spherical surfaces
2. Compute the surface normals using Equations 10 and 11.

$$ g = (V^T V)^{-1} V^T I \qquad \text{(Equation 10)} $$

To get the normal vector we simply normalize g by its length:

$$ \hat{n} = \frac{g}{|g|} \qquad \text{(Equation 11)} $$

3. From the surface normals compute the elevation z = f(u,v) and display a 3D plot of the object shape.
Once the surface normals (nx, ny, nz) are estimated using photometric stereo, they are related to the partial derivatives of f as

$$ \frac{\partial f}{\partial u} = -\frac{n_x}{n_z}, \qquad \frac{\partial f}{\partial v} = -\frac{n_y}{n_z} $$

The surface elevation z at point (u,v) is given by f(u,v) and is evaluated by a line integral:

$$ f(u,v) = \int_0^u \left(-\frac{n_x}{n_z}\right) du' + \int_0^v \left(-\frac{n_y}{n_z}\right) dv' $$

This is the resulting image.
-- codes --
loadmatfile('Photos.mat',['I1','I2','I3', 'I4']);

// display the four input images
scf(1)
subplot(141);imshow(I1,[]);
subplot(142);imshow(I2,[]);
subplot(143);imshow(I3,[]);
subplot(144);imshow(I4,[]);

// source directions, one per row
V = [0.085832 0.17365 0.98106; 0.085832 -0.17365 0.98106; 0.17365 0 0.98481; 0.16318 -0.34202 0.92542];

// stack each image as one row of the intensity matrix
I = [I1(:)';I2(:)';I3(:)';I4(:)'];

// Equation 10: least-squares solution for g (albedo-scaled normals)
g = inv(V'*V)*V'*I;

// Equation 11: normalize g by its length (small constant avoids division by zero)
c = sqrt(g(1,:).^2 + g(2,:).^2 + g(3,:).^2) + 0.000000001;
n(1,:) = g(1,:)./c;
n(2,:) = g(2,:)./c;
n(3,:) = g(3,:)./c;

// partial derivatives of the surface elevation f
dfx = -n(1,:)./(n(3,:)+0.000000001);
dfy = -n(2,:)./(n(3,:)+0.000000001);

// reshape the derivatives back into 128x128 images
imx = matrix(dfx,128,128);
imy = matrix(dfy,128,128);

// integrate (cumulative sums) along each direction and add
fx = cumsum(imx,2);
fy = cumsum(imy,1);

Image = fx+fy;

mesh(Image)

Acknowledgements:
April, Lei, Mark Leo and Aiyin
- for helping me

Grade
10/10
- since I was able to obtain the correct 3D plot.

 