We have all done it: you point your camera at a loved one standing in front of a fantastic view, and just as you press the shutter, some idiot walks in front of the lens and ruins your picture. Or perhaps you were so busy watching the birdie or saying cheese that you failed to notice the telegraph pole sticking out of your subject's head. Click... instant disfigurement, a ruined photo, another wasted opportunity. Quick! Press delete or retake the photo... but the decisive moment is lost forever. Or is it?
Well, for those using digital cameras there may soon be a fix for all these 'bad' photos, as the BBC Online article below explains. Photo manipulation for the masses may become a reality, but frankly, unless it is a do-or-die photo, it is easier to re-take the shot. Personally, I do not see this as particularly useful for holiday snaps, which I think is what the developers of the program are aiming at.
Photo tool could fix bad images
By Mark Ward, Technology correspondent, BBC News website, San Diego
[Image: This is the original image, with a roof spoiling the view]
Digital photographers could soon be able to erase unwanted elements in photos by using tools that scan for similar images in online libraries.
Research teams have developed an algorithm that uses sites like Flickr to help discover light sources, camera position and composition in a photo.
Using this data the tools then search for objects, such as landscapes or cars, that match the original.
The teams aim to create image libraries that anyone can use to edit snaps.
[Image: Stage one: the roof is isolated and the algorithm searches for similar scenes]
James Hays and Alexei Efros from Carnegie Mellon University have developed an algorithm to help people who want to remove bits of photographs.
The parts being removed could be an unsightly lorry in snaps of the rural idyll where they took a holiday, or even an old boyfriend or girlfriend they want to rub out of a photograph.
To find suitable matching elements, the research duo's algorithm looks through a database of 2.3 million images culled from Flickr.
"We search for other scenes that share as closely as possible the same semantic scene data," said Mr Hays, who has been showing off the project at the computer graphics conference Siggraph, in San Diego.
In this sense "semantic" means composition. So a snap of a lake in the foreground, hills in a band in the middle and sunset above has, as far as the algorithm is concerned, very different "semantics" to one of a city with a river running through it.
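To make that concrete, here is a minimal sketch of a composition-level comparison. The real system uses a far more sophisticated scene descriptor; the grid-of-average-colours measure and the function names below are illustrative assumptions only.

```python
import numpy as np
from PIL import Image

def scene_descriptor(path, grid=4, size=64):
    """Very coarse 'semantic' descriptor: the average colour of each cell
    in a low-resolution grid, so overall composition (sky above, water
    below) drives the comparison rather than fine detail."""
    img = np.asarray(Image.open(path).convert("RGB").resize((size, size)),
                     dtype=float) / 255.0
    cell = size // grid
    cells = img.reshape(grid, cell, grid, cell, 3).mean(axis=(1, 3))
    return cells.ravel()  # a grid*grid*3 vector summarising the layout

def scene_distance(desc_a, desc_b):
    """Smaller distance means more similar composition."""
    return float(np.linalg.norm(desc_a - desc_b))
```

Under such a measure, two sunsets over water score as close neighbours, while the lake scene and the city scene above end up far apart.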
[Image: Stage two: the algorithm compares photos online to find a matching scene]
The broad-based analysis cuts out more than 99.9% of the images in the database, said Mr Hays. The algorithm then picks the closest 200 for further analysis.
Next, the algorithm searches those 200 to see if they contain elements, such as hillsides or even buildings, of the right size and colour to fill the hole.
The useful parts of the 20 best scenes are then cropped and added to the image being edited, so the best fit can be chosen.
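The narrowing described above, from millions of images down to 200 candidates and then the 20 best, might look roughly like the sketch below; the second stage is a placeholder, since the real system scores colour and texture around the hole being filled:

```python
import numpy as np

def narrow_candidates(query_desc, library_descs, coarse_k=200, final_k=20):
    """Two-stage narrowing. 'library_descs' is an (N, D) array holding one
    scene descriptor per library image. Stage one discards over 99.9% of
    the library with a cheap global distance; stage two re-ranks the
    survivors and keeps the best 20."""
    global_dist = np.linalg.norm(library_descs - query_desc, axis=1)
    coarse = np.argsort(global_dist)[:coarse_k]     # stage 1: composition only
    # Stage 2 placeholder: the real system scores how well each survivor
    # matches the colour and texture around the hole before compositing.
    fine_dist = global_dist[coarse]
    return coarse[np.argsort(fine_dist)[:final_k]]  # indices of the 20 best
```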
Early tests of the algorithm show that only 30% of the images altered with it could be spotted as doctored, said Mr Hays.
The other approach aims to use net-based image libraries to create a clip-art library of objects that, once inserted into a photograph, look convincing.
[Image: Stage three: the finished picture has the roof removed and boats in a bay added]
"We want to generate objects of high realism while keeping the ease of use of a clip art library," said Jean-Francois Lalonde of Carnegie Mellon University who led the research.
To generate its clip art for photographs, the team has drawn on the net-based LabelMe library of images, which has many objects, such as people, trees and cars, cut out and tagged by its users.
The challenge, said Mr Lalonde, was working out which images in the LabelMe database would be useful and convincing when inserted into photographs.
The algorithm developed by Mr Lalonde and his colleagues at Carnegie Mellon and Microsoft Research analyses a scene to work out the orientation of objects and the sources of light within it.
"We use the height of the people in the image to estimate the height of the camera used to take the picture," he said.
The light sources in a scene are worked out by looking at the distribution of colour shades within three broad regions of the image: ground, vertical planes and sky.
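A stand-in for that step might simply collect per-region colour statistics, given a coarse per-pixel labelling into the three regions; turning those statistics into actual light directions is the hard part, which this hypothetical sketch leaves out:

```python
import numpy as np

def region_colour_stats(img, labels):
    """Collect colour statistics for the three coarse regions mentioned in
    the article. 'img' is an (H, W, 3) RGB array and 'labels' an (H, W)
    integer map with 0=ground, 1=vertical surfaces, 2=sky (an assumed
    labelling scheme). Returns each region's mean colour and covariance,
    the raw material a lighting model would reason from."""
    stats = {}
    for name, idx in (("ground", 0), ("vertical", 1), ("sky", 2)):
        pixels = img[labels == idx].astype(float)
        if len(pixels) == 0:
            continue  # region absent from this photo
        stats[name] = {
            "mean": pixels.mean(axis=0),
            "cov": np.cov(pixels.T) if len(pixels) > 1 else np.zeros((3, 3)),
        }
    return stats
```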
With knowledge of the position, pitch and height of the camera, and of the light sources, the algorithm then looks for images in the clip-art database that were taken from similar positions and that appear at similar pixel heights.
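Given those estimates, the final matching step could be as simple as filtering the clip-art database on camera height and on-screen size, as in this sketch; the field names and tolerances are assumed, not the project's actual schema:

```python
def compatible_clipart(scene, clipart_db, height_tol_m=0.5, pixel_tol=0.2):
    """Keep only clip-art objects photographed from a similar camera height
    and appearing at a similar pixel height to the target scene. 'scene'
    and each database entry are dicts with 'camera_height_m' and
    'pixel_height' keys (a hypothetical schema)."""
    matches = []
    for entry in clipart_db:
        close_camera = abs(entry["camera_height_m"]
                           - scene["camera_height_m"]) <= height_tol_m
        close_size = (abs(entry["pixel_height"] - scene["pixel_height"])
                      <= pixel_tol * scene["pixel_height"])
        if close_camera and close_size:
            matches.append(entry)
    return matches
```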
The group has created an interface for the database of photo clip art so people can pick which elements they want to add to a scene.