Jumat, 10 Desember 2010


HOW A DIGITAL CAMERA WORKS

While it doesn’t take a lot of know-how to operate a fully automatic camera, serious photography—whether at the hobbyist or professional level—has always required a tremendous amount of craftsmanship and technical understanding. In “wet darkroom” work, everything from paper type to chemical mixtures to the amount of agitation applied to a particular chemical has a bearing on the quality of your final print. As such, skilled film photographers have to have an in-depth understanding of the nature of their papers, chemicals, and equipment.
Digital photography is no different. To fully exploit your camera’s capabilities— and to be able to effectively edit and improve your images—you must understand some of the fundamental principles and technologies behind digital imaging.
As mentioned in Chapter 1, “Introduction,” the only real difference between a digital camera and a film camera is that a digital camera does not use film to record an image. However, this one fundamental difference affects all of the other systems on the camera, from the lens to the light meter. Consequently, knowing some of the technical details of how a digital camera works will help you select the right camera and help you better understand how to make certain decisions when shooting.

SOMETHING OLD, SOMETHING NEW
Just like a film camera, your digital camera records an image by using a lens to focus light through an aperture and a shutter and then onto a focal plane. By opening or closing the aperture and by changing the amount of time the shutter is open, the photographer can control how the focal plane is exposed. As we’ll see later, exposure control allows the photographer to change the degree to which the camera “freezes” motion, how well the film records contrast and color saturation, and which parts of the image are in focus.
While a film camera has a piece of film sitting on the focal plane, in a digital camera, an image sensor is mounted on the focal plane. An image sensor is a special type of silicon chip that is light sensitive. Currently, there are two major types of image sensors available: the charge-coupled device (CCD), and the Complementary Metal Oxide Semiconductor (CMOS). CCDs are more popular than CMOS chips, but the role of both is the same. When you take a picture, the light falling on the image sensor is sampled and converted into electrical signals. After the image sensor is exposed, these signals are boosted by an amplifier and sent to an analog-to-digital converter that turns the signals into digits. These digits are then sent to an onboard computer for processing. Once the computer has calculated the final image, the new image data is stored on a memory card. (See Figure 2.1.)



FIGURE 2.1 Light passes into a digital camera, just as it would in a film camera. However, instead of hitting a piece of film, it is digitized by a computer chip and passed to an onboard computer to create an image.
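
If it helps to see that chain of events as code, here is a minimal sketch of the capture sequence just described: sample the light, amplify the signal, digitize it, and hand the numbers on for processing. The function, its parameters, and the units are purely illustrative, not any camera's actual firmware.

# A minimal sketch (not real firmware) of the capture pipeline described above:
# sample light, amplify, digitize, then hand the numbers to the onboard processor.

def capture_image(light_levels, gain=2.0, bit_depth=12):
    """Simulate the analog signal chain for a list of photosite readings.

    light_levels: analog readings in the range 0.0-1.0 (hypothetical units).
    gain:         amplifier boost applied before digitization.
    bit_depth:    resolution of the analog-to-digital converter.
    """
    max_code = 2 ** bit_depth - 1           # e.g. 4095 for a 12-bit converter
    digital = []
    for level in light_levels:
        amplified = min(level * gain, 1.0)              # amplifier, clipped at full scale
        digital.append(round(amplified * max_code))     # analog-to-digital step
    return digital                          # ready for processing and storage

# Example: four photosites of increasing brightness
print(capture_image([0.1, 0.25, 0.4, 0.6]))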

Although the mechanics are simple to explain, to really understand how a digital camera functions, you must know a little color theory.

A LITTLE COLOR THEORY
In 1861, James Clerk Maxwell asked photographer Thomas Sutton (the inventor of the SLR camera) to take three black-and-white photographs of a tartan ribbon. Maxwell wanted to test a theory he had about a possible method for creating color photographs. He asked Sutton to place a different filter over the camera for each shot: first, a red filter, then green, and then blue. After the film was developed, Maxwell projected all three black-and-white pictures onto a screen using three projectors fitted with the same filters that were used to shoot the photos. When the images were projected directly on top of each other, the images combined and Maxwell had the world’s first color photo.
This process was hardly convenient. Unfortunately, it took more than 40 years to turn Maxwell’s discovery into a commercially viable product. This happened in 1903, when the Lumière brothers used red, green, and blue dyes to color grains of starch that could be applied to glass plates to create color images. They called their process Autochrome, and it was the first successful color photography process.
In grammar school, you probably learned that you could mix primary colors together to create other colors. Painters have used this technique for centuries, of course, but what Maxwell demonstrated is that, although you can mix paints together to create darker colors, light mixes together to create lighter colors. Or, to use some jargon, paint mixes in a subtractive process (as you mix, you subtract color to create black), whereas light mixes in an additive process (as you mix, you add color to create white). Note that Maxwell did not discover light’s additive properties— Newton had done similar experiments long before—but Maxwell was the first to apply the properties to photography.



FIGURE 2.2 Red, green, and blue—the three additive primary colors of light—can be mixed together to create other colors. As you combine them, the resulting color gets lighter, eventually becoming white. Note also that where the colors overlap they create the secondary colors—cyan, magenta, and yellow. These are the primary colors of ink.
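
A few lines of code make the additive behavior concrete. This is just arithmetic on hypothetical RGB values rather than a simulation of real light, but it shows how the primaries in Figure 2.2 combine.

# A small sketch of additive mixing: combining full-strength red, green, and
# blue light yields white, and pairwise combinations yield the secondary colors.

def add_light(*colors):
    """Additively mix RGB colors given as (r, g, b) tuples in the 0-255 range."""
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(red, green))        # (255, 255, 0)   -> yellow
print(add_light(red, blue))         # (255, 0, 255)   -> magenta
print(add_light(green, blue))       # (0, 255, 255)   -> cyan
print(add_light(red, green, blue))  # (255, 255, 255) -> white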

Your digital camera makes color images using pretty much the same process Maxwell used in 1861: it combines three different black-and-white images to create a full-color final image.
The image shown in Figure 2.3 is called an RGB image because it uses red, green, and blue channels to create a color image.



FIGURE 2.3 In a digital image, three separate red, green, and blue channels are combined to create a final, full-color picture.


This color theory is not just a trivial history lesson. Understanding that your full color images are composed of separate channels will come in very handy later, when you start editing. Very often, you’ll correct color casts and adjust your images by viewing and manipulating individual color channels.
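
As a taste of that kind of channel-level editing, here is a small sketch using the Pillow imaging library. The filename and the size of the blue-channel adjustment are placeholders, and the "color cast" being corrected is hypothetical.

# A minimal sketch, using the Pillow library, of treating an RGB image as its
# three grayscale channels; "photo.jpg" is a placeholder filename.

from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
r, g, b = img.split()                 # three separate grayscale channels

# Example of a channel-level adjustment: brighten only the blue channel
# to offset a hypothetical yellow color cast, then recombine.
b = b.point(lambda value: min(value + 20, 255))
corrected = Image.merge("RGB", (r, g, b))
corrected.save("photo_corrected.jpg")
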

You Say “Black and White,” I Say “Grayscale”
Although film photographers use the term black-and-white to denote an image that lacks color, in the digital world it’s better to use the term grayscale. As we saw in Figure 1.1, your computer can create an image that is composed of only black and white pixels. Consequently, it’s sometimes important to distinguish between an image that is made up of black and white pixels, and one that is made up of pixels of varying shades of gray.
In the century and a half since Maxwell’s discovery, many other ways of representing color have been discovered. For example, another model called L*A*B color (also known as Lab color) uses one channel for lightness information, another channel for greenness or redness, and a third channel for blueness or yellowness. In addition, there is the cyan, magenta, yellow, and black (CMYK) model that printers use.
Each of these approaches is called a color model, and each model has a particular gamut, or range, of colors it can display. Some gamuts are more appropriate to certain tasks than others are, and all are smaller than the range of colors your eye can perceive.
We’ll deal more with gamuts and color models in later chapters. For now, it’s important to understand that digital photos are made up of separate red, green, and blue channels that combine to create a color image.

HOW AN IMAGE SENSOR WORKS
George Smith and Willard Boyle were two engineers employed by Bell Labs. The story goes that one day in late October, the two men spent about an hour sketching out an idea for a new type of semiconductor that could be used for computer memory and for the creation of a solid-state, tubeless video camera. The year was 1969, and in that hour, the two men invented the CCD.
Roughly a year later, Bell Labs created a solid-state video camera using Smith and Boyle’s new chip. Although their original intention was to build a simple camera that could be used in a video-telephone device, they soon built a camera that was good enough for broadcast television.
Since then, CCDs have been used in everything from cameras to fax machines. Because video cameras don’t require a lot of resolution (only half a million pixels or so), the CCD worked great for creating video-quality images. For printing pictures, though, you need much higher resolution—millions and millions of pixels. Consequently, it wasn’t until recently that CCDs could be manufactured with enough resolution to compete with photographic film.

CCD VS. CMOS
Ninety to 95 percent of the digital cameras you’ll look at will use CCD image sensors. The rest will use a CMOS chip of some kind. What’s the difference? Because much more research has been put into CCD technology, it’s more prevalent. CMOS chips, though, are actually much cheaper to produce than the difficult-to-make CCDs found in most cameras. CMOS chips also consume much less power than does a typical CCD, making for longer camera battery life and fewer overheating problems. CMOS also offers the promise of integrating more functions onto one chip, thereby enabling manufacturers to reduce the number of chips in their cameras. For example, image capture and processing could both be performed on one CMOS chip, further reducing the price of a camera.
Although CMOS used to have a reputation for producing rough images with inferior color, Canon®’s excellent EOS series of digital SLRs has shown that CMOS can be a viable alternative to CCDs.
Both chips register light in the same way, and for the sake of this discussion, the two technologies are interchangeable. In the end, image sensor choice is irrelevant as long as the camera delivers an image quality you like.


Counting Electrons
Photographic film is covered with an emulsion of light-sensitive, silver-laden crys- tals. When light hits the film, the silver atoms clump together. The more light there is, the bigger the clumps. In this way, a piece of film records the varying amounts of light that strike each part of its surface. A piece of color film is actually a stack of three separate layers—one sensitive to red, one to green, and one to blue.
To a degree, you have Albert Einstein to thank for your digital camera, because he was the first to explain the photoelectric effect. It is because of the photoelectric effect that some metals release electrons when exposed to light. (Einstein actually won the 1921 Nobel Prize in physics for his work on the photoelectric effect, not for his work on relativity or gravity, as one might expect.)
The image sensor in your digital camera is a silicon chip that is covered with a grid of small electrodes called photosites, one for each pixel. (See Figure 2.4.)
Before you can shoot a picture, your camera charges the surface of the CCD with electrons. Thanks to the photoelectric effect, when light strikes a particular photosite, the metal in that site releases some of its electrons. Because each photosite is bounded by nonconducting metal, the electrons remain trapped. In this way, each photosite is like a very shallow well, storing up more and more electrons as more and more photons hit. After exposing the CCD to light, your camera simply has to measure the voltage at each site to determine how many electrons are there and, thus, how much light hit that particular site. (As was discussed in Chapter 1, this process is called sampling.) This measurement is then converted into a number by an analog-to-digital converter.



FIGURE 2.4 The sensor from a Nikon D70 has an imaging area of 23.7 mm by 15.6 mm.

Most cameras use either a 12-bit or 14-bit analog-to-digital converter. That is, the electrical charge from each photosite is converted into a 12- or 14-bit number. In the case of a 12-bit converter, this produces a number between 0 and 4,095; with a 14-bit converter, you get a number between 0 and 16,383. Note that an analog-to-digital converter with a higher bit depth doesn’t give your CCD a bigger dynamic range. The brightest and darkest colors it can represent remain the same, but the extra bit depth does mean that the camera will produce finer gradations within that dynamic range. As you’ll see later, how many bits get used in your final image depends on the format in which you save the image.
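
The following few lines illustrate the point about bit depth: a deeper analog-to-digital converter divides the same signal range into finer steps rather than extending that range. The quantize function is, of course, a simplification of what a real converter does.

# A small sketch of what the extra bit depth buys: the same 0.0-1.0 signal
# range is divided into finer steps, not a wider range.

def quantize(signal, bit_depth):
    """Map an analog signal in the range 0.0-1.0 to an integer code."""
    levels = 2 ** bit_depth            # 4,096 levels at 12 bits; 16,384 at 14 bits
    return min(int(signal * levels), levels - 1)

for bits in (12, 14):
    print(bits, "bits:", quantize(0.5, bits), "of", 2 ** bits - 1)
# 12 bits: 2048 of 4095
# 14 bits: 8192 of 16383
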
The term charge-coupled device (CCD) is derived from the way the camera reads the charges of the individual photosites. After exposing the CCD, the charges on the first row of photosites are transferred to a read-out register where they are amplified and then sent to an analog-to-digital converter. Each row of charges is electrically coupled to the next row so that, after one row has been read and deleted, all of the other rows move down to fill the now empty space. (See Figure 2.5.)



FIGURE 2.5 Rows of photosites on a CCD are coupled together. As the bottom row of photosites is read off the bottom of the CCD, all of the rows above it shift down. This is the “coupled” in charge-coupled device.

After all the rows of photosites have been read, the CCD is recharged with electrons and is ready to shoot another image.
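
A toy simulation of that coupled readout might look like the following; the three-row "sensor" and its charge values are invented purely for illustration.

# A simplified sketch of the "coupled" readout described above: the bottom row
# of charges is read out, and every remaining row shifts down to take its place.

def read_out(photosite_rows):
    """Read a 2D grid of photosite charges one row at a time, bottom row first."""
    rows = [row[:] for row in photosite_rows]   # copy so the sensor state stays local
    readings = []
    while rows:
        readings.append(rows.pop())             # read and delete the bottom row;
                                                # the rows above "shift down"
    return readings

sensor = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
print(read_out(sensor))   # [[7, 8, 9], [4, 5, 6], [1, 2, 3]]
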
Photosites are sensitive only to how much light they receive; they know nothing about color. As you’ve probably already guessed, to see color your camera needs to perform some type of RGB filtering similar to what James Maxwell did. There are a number of ways to perform this filtering, but the most common is through a single array system, sometimes referred to as a striped array.

Arrays
Consider the images in Figure 2.6. If asked to fill in any “missing” pixels in Figure 2.6a, you’d probably say, “What are you talking about?” If asked to fill in any “missing” pixels in Figure 2.6b, though, you probably would have no trouble creating the image in Figure 2.7.



FIGURE 2.6 Although you have no idea what pixels belong in Figure 2.6a, you can probably hazard a guess as to what the missing pixels are in Figure 2.6b.

You would know which pixels you needed to fill in based on the other pixels that were already in the image. In other words, you would have interpolated the new pixels based on the existing information. You might have encountered interpolation if you’ve ever resized a photograph using an image-editing program such as Photoshop. To resize an image from 2,048 × 1,536 pixels to 4,096 × 3,072 pixels, your image editor has to perform many calculations to determine what color all of those new pixels should be. (Obviously, in this example, your ability to interpolate is based on your ability to recognize a familiar icon—the happy face. An image editor knows nothing about the content of an image, of course, and must interpolate by carefully examining each of the pixels in an image to determine what the colors of any new additional pixels should be.)



FIGURE 2.7 If asked to fill in the “missing” pixels in Figure 2.6b, you would probably come up with an image something like this.
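
Here is a deliberately tiny example of interpolation in the same spirit as the resize scenario above: stretching a single row of grayscale values by averaging neighboring pixels. Real image editors use far more sophisticated resampling, so treat this only as an illustration of the idea.

# A toy sketch of interpolation: roughly doubling a row of grayscale pixels by
# inserting the average of each pair of neighbors.

def double_row(pixels):
    """Return a row roughly twice as long, with interpolated values between neighbors."""
    result = []
    for left, right in zip(pixels, pixels[1:]):
        result.append(left)
        result.append((left + right) // 2)   # new pixel: average of its neighbors
    result.append(pixels[-1])
    return result

print(double_row([10, 20, 40, 40]))   # [10, 15, 20, 30, 40, 40, 40]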

A typical digital camera uses a form of interpolation to create a color image. As we saw in the previous section, the image sensor in your camera is able to create a grayscale image of your subject by measuring the amount of light that strikes each part of the image sensor. To shoot color, your camera performs a variation of the same type of RGB filtering Maxwell used in 1861. Each photosite on your camera’s image sensor is covered by a filter—red, green, or blue. This combination of filters is called a color filter array, and most image sensors use a filter pattern like the one shown in Figure 2.8, called the Bayer Pattern.



FIGURE 2.8 To see color, alternating pixels on an image sensor are covered with a different colored filter. The color filter array shown here is called the Bayer Pattern.

With these filters, the image sensor can produce separate, incomplete red, green, and blue images. The images are incomplete because the red image, for example, is missing all of the pixels that were covered with a blue filter, whereas the blue image is missing all of the pixels that were covered with a red filter. Both the red and blue images are missing the vast number of green-filtered pixels.
A sophisticated interpolation method is used to create a complete color image. Just as you used the partial pixel information in Figure 2.6b to calculate the missing pixels, your digital camera can calculate the color of any given pixel by analyzing all of the adjacent pixels. For example, if you look at a particular pixel and see that the pixel to the immediate left of it is a bright red pixel, the pixel to the right is a bright blue pixel, and the pixels above and below are bright green, then the pixel in question is probably white. Why? As Maxwell showed, if you mix red, green, and blue light together, you get white light. (By the way, if you’re wondering why there are so many more green pixels than red or blue pixels, it’s because the eye is most sensitive to green. Consequently, it’s better to have as much green information as possible.)
This process of interpolating is called demosaicing, and different vendors employ different approaches to the demosaicing process. For example, many cameras look at only immediately adjacent pixels, but Hewlett-Packard cameras analyze a region up to 9 × 9 pixels. The Fuji® SuperCCD eschews the grid pattern of square photosites in favor of octagonal photosites arranged in a honeycomb pattern. Such a scheme requires even more demosaicing to produce rectangular image pixels, but Fuji claims this process yields a higher resolution. Differences in demosaicing algorithms are one factor that makes some cameras yield better color than others.
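
To make the idea concrete, here is a heavily simplified demosaicing step: estimating the missing green value at one red-filtered photosite by averaging its green-filtered neighbors. The Bayer layout and the raw readings below are invented, and real cameras use far more elaborate algorithms.

# A heavily simplified sketch of demosaicing on a Bayer mosaic: estimate the
# missing green value at a red-filtered site from its green-filtered neighbors.

BAYER = [["G", "R", "G", "R"],
         ["B", "G", "B", "G"],
         ["G", "R", "G", "R"],
         ["B", "G", "B", "G"]]

def green_at(mosaic, values, row, col):
    """Estimate green at (row, col), assuming that site is not green-filtered."""
    neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    greens = [values[r][c] for r, c in neighbors
              if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0]) and mosaic[r][c] == "G"]
    return sum(greens) / len(greens)

# Hypothetical raw readings from the sensor (one number per photosite).
raw = [[ 90, 200,  95, 210],
       [ 60, 100,  65, 105],
       [ 92, 205,  96, 212],
       [ 61, 102,  66, 108]]

print(green_at(BAYER, raw, 0, 1))   # green estimate at a red-filtered site: 95.0
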
Some cameras use a different type of color filter array. Canon, for example, often uses cyan, yellow, green, and magenta filters on the photosites of their image sensors. Because it takes fewer layers of dye to create cyan, yellow, green, and magenta filters than it does to create red, green, and blue filters, more light gets through the CYGM filter to the sensor. (Cyan, yellow, and magenta are the primary colors of ink, and therefore don’t need to be mixed to create the color filters; hence, they aren’t as thick.) More light means a better signal-to-noise ratio, which produces images with less noise.
As another example, Sony sometimes uses red, green, blue, and emerald filters. They claim these filters give a wider color gamut, yielding images with more accurate color. Dissenters argue that this approach results in bright areas of the image having a cyan color cast.
Image sensors are often very small, sometimes as small as 1/4 or 1/2 inch (6 or 12 mm, respectively). By comparison, a single frame of 35 mm film is 36 × 24 mm. (See Figure 2.9.) The fact that image sensors can be so small is the main reason why digital cameras can be so tiny.
By packing more and more photosites onto an image sensor, chipmakers can increase the sensor’s resolution. However, there is a price to pay for this. To pack more photosites onto the surface of the chip, the individual sites have to be made much smaller. As each site gets smaller, its capability to collect light is compromised because it simply doesn’t have as much physical space to catch passing photons. This limitation results in a chip with a poor signal-to-noise ratio; that is, the amount of good data the chip is collecting—the signal—is muddied by the amount of noise—noise from the camera’s electronics, noise from other nearby electrical sources, noise from cosmic rays raining down from space—the chip is collecting.

FIGURE 2.9 Most CCDs are very small, particularly when compared to the size of 35 mm film.

In your final image, this signal-to-noise confusion can manifest as grainy patterns in your image—visible noise like what you see on a staticky TV channel—or other annoying artifacts. (See the noise example in Figure 4.2.)
To improve the light-collecting capability of tiny photosites, some chipmakers position tiny microlenses over each photosite. These lenses focus the light more tightly into the photosite in an effort to improve the signal-to-noise ratio. However, these lenses can cause problems of their own in the form of artifacts in your final image.
Image sensors suffer from another problem that film lacks. If too much light hits a particular photosite, it can spill over into adjacent photosites. If the camera’s software isn’t smart enough to recognize that this has happened, you will see a blooming artifact—smearing colors or flared highlights—in your final image. Blooming is more prevalent in a physically smaller image sensor with higher resolution because the photosites are packed more tightly together. This problem is not insurmountable, and even if your image does suffer from blooming problems from time to time, these artifacts won’t necessarily be visible in your final prints.
As you might expect, interpolating the color in a camera with millions of pixels on its image sensor requires a lot of processing power. Such power (and the memory needed to support it) is one reason why digital cameras have stayed so pricey—lots of fancy chips are necessary to make a digital camera.

Extra Pixels
Not all of the photosites in an image sensor are used for recording your image. Some are used to assess the black levels in your image; others are used for determining white balance. Finally, some pixels are masked away altogether. For example, if the sensor has a square array of pixels but your camera manufacturer wants to create a camera that shoots rectangular images, they will mask out some of the pixels on the edge of the sensor to get the picture shape they want.

Selasa, 28 Juli 2009


Basic SLR Cameras

Photography is a very simple process: rays of light reflected from a subject pass through a hole in a box (a camera) to form an image on light-sensitive material (film). But, because light levels never stay the same and films vary in sensitivity, the camera needs special controls so that it can be adjusted to allow in the correct amount of light.

These controls don’t make the camera more difficult to use, just more flexible. This flexibility means that the photographer can shoot many different subjects in different lighting conditions with just one camera. A camera such as a 35mm manual-focus or autofocus SLR, with a range of controls, will enable a photographer to take anything from happy snaps to breathtaking landscapes, nature subjects, special-effects pictures and, who knows, along the way perhaps some stunning and even inspiring images.

Even high-end 35mm automatic snap-shot cameras, while not as versatile as an SLR, enable you to do much more than just point the camera and press the button.
Both of these camera types are widely available today and both have their pros and cons. A closer look at their features, plus a grounding in camera basics, will help you to make better use of them and improve your photography as a whole.


The Camera

There are two features which control the amount of light that is transferred on to the film. The first is the aperture, which is a hole of variable size. Most SLR camera lenses have eight or nine variations of aperture size, known as f-stops. They are shown like this: f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22 and f/32. The widest aperture opening in this range is f/2, while the smallest is f/32.

The aperture controls the amount of light entering the camera and getting on to the film. It is adjusted manually via an aperture ring, or electronically on an automatic camera.
The length of time that the light is allowed into the camera is controlled by the shutter. Because today’s films are so sensitive to light, shutter speeds are mostly in fractions of a second, with only a few shutter speeds lasting for one second or longer. A typical shutter speed range on an SLR is shown as fractions of a second and full seconds like this: 1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, 1/8, 1/4, 1/2, 1, 2, 4, 8 and B. The letter B refers to “bulb” or “brief” time. At this setting, commonly known as the “B setting,” the shutter stays open for as long as the camera’s shutter button is pressed in. When you lift your finger off the shutter button, the shutter closes.
So, in order to get a correctly exposed image, the aperture and the shutter speed work together to allow a fixed amount of light for a fixed amount of time into the camera and then on to the film, to produce the image. The whole procedure is called the exposure. A correct exposure setting is when the aperture and the shutter speed chosen produce an image that is not too dark and not too light.

Different combinations of aperture and shutter speed will give the same exposure. For example, a setting of f/8 at 1/60 second will give the same exposure as f/5.6 at 1/125 second. Also, the shutter speed in use will depend on whether the photographer wants to freeze the action, wants a moving subject to record as a blur, or simply wants an in-between shutter speed for a subject that’s not moving.
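
You can check that equivalence with the standard exposure value formula, EV = log2(N^2 / t), where N is the f-number and t is the shutter speed in seconds. The short sketch below does the arithmetic; because marked camera settings are rounded, the two results agree only approximately.

# A small check of the equivalence claimed above, using EV = log2(N^2 / t).

import math

def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

print(round(exposure_value(8.0, 1 / 60), 2))    # ~11.91
print(round(exposure_value(5.6, 1 / 125), 2))   # ~11.94  (nearly the same exposure)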

Fast shutter speeds like 1/125, 1/250, 1/500, 1/1000sec or higher will sharply record moving subjects. Slow shutter speeds of 1/60, 1/30, 1/15, 1/8sec or below won’t be able to record moving subjects sharply.


Exposure Modes
All exposure modes come from three basic types: manual exposure, aperture-priority auto exposure and shutter-priority auto exposure.

Manual exposure enables the photographer to choose the aperture and the shutter speed that will give the correct exposure.

Aperture-priority auto exposure is a system in which the photographer chooses an aperture and the camera automatically selects the correct shutter speed.

Shutter-priority auto exposure enables the photographer to choose a shutter speed and the camera automatically selects the correct aperture.

These exposure modes are found in most 35mm SLRs. Some basic SLRs only have the manual exposure mode, while higher level SLRs have these three modes plus many more.
One of these advanced exposure modes is called program. This is a fully automatic exposure mode that chooses and sets the correct aperture and shutter speed. Program’s fully automatic approach is ideal for compact cameras, almost all of which have this mode. It is also included on SLRs as a snapshot mode, used for general picture-taking.

There are versions of program mode that favor shutter speeds or apertures (as with aperture-priority auto exposure). A further refinement of program mode is something called program shift. This feature enables the user to freely adjust the aperture or shutter speed as needed. The camera’s exposure system will then automatically choose the corresponding shutter speed or aperture that will give a correct exposure. Think of it as a combination of aperture-priority auto and shutter priority auto exposure.
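
The following sketch shows the basic idea behind aperture-priority automation (and, by extension, program shift): fix one variable and solve for the other from the metered exposure value. The numbers are illustrative and not taken from any particular camera.

# A minimal sketch of aperture-priority auto exposure: the photographer fixes
# the f-number, the meter supplies an exposure value, and the camera solves
# for the shutter speed (t = N^2 / 2^EV).

def shutter_for(f_number, metered_ev):
    """Return the shutter speed (in seconds) that completes the exposure."""
    return f_number ** 2 / (2 ** metered_ev)

# Bright daylight is roughly EV 15 (at ISO 100); at f/8 this suggests about 1/500 sec.
seconds = shutter_for(8.0, 15)
print(f"1/{round(1 / seconds)} sec")   # 1/512 sec; the nearest marked speed is 1/500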

For the SLR user, the automatic exposure modes save time in setting apertures or shutter speeds, or both together. Many SLRs of this type also have an auto exposure (AE) lock feature which enables you to lock the exposure for one part of a scene or subject. This is useful if there’s a risk of the camera’s exposure system being “confused” by tricky lighting (it does happen!). In this case use the AE lock to take an auto exposure reading of your main subject. Lock it, then compose your shot and take the picture.


The Metering System
The metering system in a 35mm SLR camera measures the amount of light in the scene and calculates the best-fit exposure value based on the metering mode explained below. Automatic exposure is a standard feature in all SLR cameras. All you have to do is select the metering mode, point the camera and press the shutter release. Most of the time, this will result in a correct exposure.
The metering method defines which information from the scene is used to calculate the exposure value and how it is weighted. Metering modes depend on the camera and the brand, but are mostly variations of the following three types:

Matrix or Evaluative Metering

This is probably the most complex metering mode, offering the best exposure in most circumstances. Essentially, the scene is split up into a matrix of metering zones which are evaluated individually. The overall exposure is based on an algorithm specific to that camera, the details of which are closely guarded by the manufacturer. Often they are based on comparing the measurements to the exposure of typical scenes.

Center-weighted Average Metering

Probably the most common metering method, implemented in nearly every SLR camera and the default for those SLR cameras which don't offer metering mode selection. This method averages the exposure of the entire frame but gives extra weight to the center and is ideal for portraits.

Spot (Partial) Metering

Spot metering allows you to meter the subject in the center of the frame (or on some cameras at the selected AF point). Only a small area of the whole frame is metered and the exposure of the rest of the frame is ignored. This type of metering is useful for brightly backlit, macro, and moon shots.
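
The sketch below contrasts two of these approaches on a small grid of invented brightness readings: a plain average versus a center-weighted average that counts the middle zones more heavily. Real meters use many more zones and proprietary weighting.

# A toy comparison of average metering and center-weighted metering on a
# small grid of scene brightness readings (values are arbitrary units).

def average_metering(zones):
    flat = [value for row in zones for value in row]
    return sum(flat) / len(flat)

def center_weighted(zones, center_weight=3):
    total, weight_sum = 0, 0
    rows, cols = len(zones), len(zones[0])
    for r, row in enumerate(zones):
        for c, value in enumerate(row):
            # weight the central block more heavily than the edge zones
            weight = center_weight if (0 < r < rows - 1 and 0 < c < cols - 1) else 1
            total += value * weight
            weight_sum += weight
    return total / weight_sum

scene = [[12, 12, 12, 12],
         [12,  6,  6, 12],
         [12,  6,  6, 12],
         [12, 12, 12, 12]]   # bright surround, darker subject in the center

print(average_metering(scene))   # 10.5
print(center_weighted(scene))    # 9.0 -> leans toward the darker central subject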


The SLR Viewfinder

The viewfinder is the single most important user interface on any camera. Throughout the history of cameras, the method of aiming the camera accurately and communicating its view to the operator is what has determined and defined most of the different basic camera types.
Yet the viewfinder is perhaps the single most fudged and botched aspect of today's 35mm SLRs. With the exception of the Contax Aria of the late '90s and the more recent Minolta Maxxum 7, virtually all entry-level to mid-range cameras skimp on the viewfinder. The worst offenders are cameras that are meant to be cheap (they have mirror-box prisms) or cameras that are meant to be small (which usually have poorer coverage).

For the Sake of Clarity
To be clear, let's define a few terms about viewfinders, just in case you're not entirely up to speed. (And if you aren't, don't feel bad. Most people aren't. Why do you think the manufacturers are able to get away with such blatant skimping? An educated consumer is a dangerous consumer. Oh, did I say "dangerous"? I meant "demanding." Or maybe "discriminating." Undesirable, in any event.)










Magnification: this refers to how big the viewfinder image appears to be in an absolute sense. Like a batting average, it's usually expressed as some decimal fraction of one. 1X is the size that things appear to be when you look at them with your eye (a.k.a. "the naked eye"). Now, obviously, magnification also changes when you use different lens focal lengths — telephotos make things look bigger, wide-angles make things look smaller. So camera magnification is specified with a 50mm lens. Less often stated is that the lens must be set at infinity, because magnification also changes slightly depending on how close or far you focus the lens.

Let's say a camera's magnification is .75X. What this means is that your camera, with a 50mm lens on it, set at infinity, makes things appear to be three-quarters the size they look to be with your naked eye. .5X means half as big; .9X means nine-tenths as big. Better cameras have higher magnification. .88X is better than .67X. You're getting this.
I hope it stands to reason that magnification also determines the apparent relative size of the viewfinder image rectangle. I once tried an interesting little experiment — with identical 50mm lenses on both, I held a Pentax ME Super (high magnification) to one eye and a Pentax ZX-5n (low magnification) to the other. The ZX-5n's viewfinder image fit inside the ME Super's with lots of room to spare.

Coverage: this compares what you can see in the viewfinder with what will be recorded on the film. It's reported as a percentage. If you can see through the viewfinder half of what will be on the negative, that would be 50% coverage.

To further confound matters, coverage is sometimes reported as a linear measure and sometimes as an area measure. To simplify this, imagine a big square drawn on graph paper that has ten little squares per side. The linear measure is 10 x 10 little squares, and the area measure is 100 little squares. Now imagine that we're going to draw a slightly smaller square inside the big one that's smaller by one little square on each side. That square has eight little squares on each side. The linear coverage of the inside square is 80% of the larger one (8 instead of 10); the area coverage is 64% (8 x 8 instead of 10 x 10). You can see from this that when one camera manufacturer reports that its viewfinder has 92% coverage and another reports 95% coverage, you still can't quite be sure how they compare, because one might be reporting linear coverage and the other area coverage.
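
If you want to check the arithmetic, the conversion is simply the square of the linear figure, as the short snippet below shows for the graph-paper example and for the 92% and 95% figures mentioned above (treated here as linear coverage).

# A quick check of the graph-paper arithmetic: squaring a linear coverage
# figure gives the corresponding area coverage figure.

def area_from_linear(linear_coverage):
    """Convert linear coverage (per side) to area coverage."""
    return linear_coverage ** 2

print(area_from_linear(0.80))   # ~0.64 -> the 8-of-10 squares example
print(area_from_linear(0.92))   # ~0.85
print(area_from_linear(0.95))   # ~0.90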

Now, if you were no expert and just taking a stab at this, you'd probably guess that you would want to see in the viewfinder all of the picture you're about to take. It stands to reason you don't want to see half of it, or a tenth of it, so why wouldn't you just want to see all of it? As with many things, however, it turns out that the uncomplicated answer is not the correct one.


The Lens

The third most important aspect of an SLR camera, apart from enabling the photographer to see through the lens to accurately compose his pictures and to assess focus, is lens interchangeability. Whether the SLR is a manual focus or an autofocus model, most have access to a huge range of interchangeable lenses that can be used with them.

Manual focus lenses can only be used with manual focus SLRs, while autofocus lenses can only be used with AF SLRs. However, there are exceptions to this rule, namely Nikon. They allow a certain amount of lens interchangeability between their manual and autofocus SLR models. Some independent lens manufacturers (those that don’t have SLR ranges of their own) also produce lenses which allow this type of flexibility.

Manual lenses have a manual focus ring and an aperture ring which has a full range of the most commonly used apertures. A manual focus zoom lens also has a zoom ring, which the photographer can use to alter the focal length. This varies the amount of the scene that will be seen in the viewfinder and the amount of the scene that will therefore be included on film. Typical zoom lens focal lengths are 24-70mm and 70-200mm though there are others with different focal lengths.

























Fast and slow lenses

Some lenses have wide maximum apertures, such as f/1.8 and f/1.4 (the widest aperture commonly found on lenses). There are rarer, very expensive lenses of f/1.2 or even f/1.
Others start with smaller maximum apertures such as f/3.5 and even f/5.6. Wide maximum aperture lenses are referred to as “fast” lenses, while those with smaller maximum apertures are called “slow” lenses.
Zoom lenses have slower maximum apertures than fixed focal length lenses (i.e. 28mm, 50mm, 200mm and so on). And a zoom lens’s maximum aperture changes, becoming smaller as it goes toward the long end of its focal length range. So while a 70-200mm f/3.5-5.6 zoom may have a maximum aperture of f/3.5 at the 70mm setting, this will become f/5.6 at the 200mm setting. With fixed focal length lenses, and some zooms, there is just one maximum aperture.

Focusing
Manual focus SLRs need to have the lens focused manually in order to give a sharp image. Autofocus (AF) SLRs are able to automatically focus the lens when the shutter button is pressed. Both have their benefits and though AF SLRs are generally more sophisticated and easy to use, manual focus SLRs are mostly cheaper and give a bit more user control. Some manual focus SLRs are expensive too, and, while lacking autofocus, have other sophisticated features. Because of this and because of the amount of user control they offer, many of these are popular with professional photographers. But many pros in certain fields (for example, sports and news photography) use AF SLRs for their sheer convenience.


Depth of Field

A correctly focused lens will produce a sharp image of the subject you’re photographing. But some subjects, such as landscapes, also need to be sharp in the foreground and in the background, as well as in the main subject area. The area of sharpness that extends in front of, as well as behind, the subject is known as the depth of field.
Wide apertures, such as f/2, f/2.8, f/4 and f/5.6, give shallow depth of field; that is, there is only a narrow area across the subject area which is sharp.
Small apertures – f/8, f/11, f/16, f/22 and f/32 – give greater depth of field, which means that a larger portion of the scene in front of and behind the subject is sharp, as well as the subject itself.

Greater depth of field is ideal for landscapes or just general subjects where you want everything to be in sharp focus.
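
For the numerically inclined, the standard thin-lens depth-of-field formulas show the effect of stopping down. The sketch below assumes a 50mm lens focused at 3 meters and the usual 0.03mm circle of confusion for 35mm film; real lenses, subjects and print sizes will shift the exact limits.

# A hedged sketch of the standard depth-of-field formulas, just to show
# numerically why stopping down from f/2.8 to f/16 deepens the zone of sharpness.

def depth_of_field(focal_mm, f_number, focus_mm, coc_mm=0.03):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    far = (focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
           if focus_mm < hyperfocal else float("inf"))
    return near / 1000, far / 1000   # near and far limits in meters

# 50mm lens focused at 3 meters
print(depth_of_field(50, 2.8, 3000))   # roughly 2.7 m to 3.3 m
print(depth_of_field(50, 16, 3000))    # roughly 1.9 m to 6.9 m (much deeper)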














Shallow depth of field has its uses, too. A wide aperture can make background details unclear, so the viewer will only concentrate on the part of the scene that is sharp – the main subject. This can be very effective if photographing a person against a busy-looking background. Use a wide aperture and the distracting background will become indistinct. Aperture-priority exposure mode can do this for you.



















A second benefit of shallow depth of field is that the wider the aperture, the faster the corresponding shutter speed. This is excellent for photos of people, as movement, even changing facial expressions, can then be more accurately captured by the camera.


Flash

Flash units are either built into the camera or separate. Nearly all compact cameras and some autofocus SLRs have a built-in flash unit. On SLRs this unit is situated on top of the camera. These units deliver enough power to be useful as emergency flash illumination for subjects within a close range. Some are even more sophisticated and can alter the coverage of the flash illumination to correspond to a (limited) range of lens focal lengths. But none have the power or advanced features of a separate flashgun. This separate unit fits into the hotshoe found on the top of all SLRs. While many of these flashguns are reasonably priced, the more powerful ones can cost as much as a mid-priced SLR.

This is because they are capable of giving illumination over a larger area. And they have additional features that make further use of this power. But there are many low to mid-priced models which give a good combination of reasonable power output plus other features.




















Flash sync
The correct flash synchronization or sync speed is indicated by an “X” near the appropriate shutter speed on the shutter speed dial of a manual focus SLR. Alternatively, the sync speed may be highlighted by being in a different color to the other shutter speeds.
An autofocus SLR’s flash sync speed is shown in the instruction book. Typical sync speeds on older SLRs are 1/60sec or 1/90sec. On newer models the sync speeds are usually 1/125 or 1/250sec.

Although shutter speeds faster than the recommended flash sync speed will cause blacking out of part of the image, shutter speeds slower than the sync speed will not have this effect.
With very slow shutter speeds, a subject that’s moving will appear as a blur, while the flash will sharply freeze part of it. This gives an impression of movement, though if you want to avoid it then stick to your SLR’s fastest flash sync speed for sharp shots.