Category: To Do

  • Colour Management

     Sources

    Cambridge in Colour: Colour Management and Printing series

    Underlying concepts and principles: Human Perception; Bit Depth; Basics of digital cameras: pixels

    Color Management from camera to display Part 1: Concept and Overview; Part 2: Color Spaces; Part 3: Color Space Conversion; Understanding Gamma Correction

    Bit Depth

    Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue. Each primary color is often referred to as a “color channel.” Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits,” used to specify each color channel (bpc) or each pixel (bpp). Images with higher bit depths can encode more shades, colors, or intensity values, since more combinations of 0’s and 1’s are available.

    Most color images from digital cameras have 8 bits per channel, and so each channel can use a total of eight 0’s and 1’s. This allows for 2^8, or 256, different combinations, translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3), or 16,777,216, different colors, or “true color.” This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel, and 2^(3X) if X refers to the bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.

    Bits Per Pixel | Number of Colors Available | Common Name(s)
    1              | 2                          | Monochrome
    2              | 4                          | CGA
    4              | 16                         | EGA
    8              | 256                        | VGA
    16             | 65,536                     | XGA, High Color
    24             | 16,777,216                 | SVGA, True Color
    32             | 16,777,216 + Transparency  |
    48             | 281 Trillion               |
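    As a quick sanity check, the bit-depth arithmetic above can be sketched in a few lines of Python (the function names are illustrative, not from any library):

```python
def colors_from_bpp(bits_per_pixel):
    """Number of distinct colors for a given bits-per-pixel value: 2^X."""
    return 2 ** bits_per_pixel

def colors_from_bpc(bits_per_channel, channels=3):
    """Number of distinct colors for a given bits-per-channel value: 2^(3X)."""
    return 2 ** (bits_per_channel * channels)

# 24 bpp and 8 bpc describe the same "true color" palette.
assert colors_from_bpp(24) == colors_from_bpc(8) == 16_777_216

print(colors_from_bpp(48))  # 281,474,976,710,656 -- roughly "281 trillion"
```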
    USEFUL TIPS
    • The human eye can only discern about 10 million different colors, so saving an image at more than 24 bpp is excessive if the only intended purpose is viewing. On the other hand, images with more than 24 bpp are still quite useful, since they hold up better under post-processing (see “Posterization Tutorial“).
    • Color gradations in images with less than 8 bits per color channel can be clearly seen in the image histogram.
    • The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8 bits and 16 bits per channel, respectively.

    BASICS OF DIGITAL CAMERA PIXELS

    The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

     

    OVERVIEW OF COLOR MANAGEMENT

    “Color management” is a process in which the color characteristics of every device in the imaging chain are known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.

    In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:

    [Diagram: the digital imaging chain.]

    Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.

    THE NEED FOR PROFILES & REFERENCE COLORS

    Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.

    Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:

    [Illustration: a spiciness calibration table.]

    To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.

    As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers, and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.
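    The three steps above can be sketched as a toy lookup. This is purely illustrative; the profile values below are invented for the analogy:

```python
# Each person's "profile" maps their subjective label to a standardized
# pepper concentration (the shared reference), as in the taste test.
profile_you  = {"mild": 3, "medium": 5, "hot": 8}         # your taste test
profile_cook = {"extra mild": 5, "mild": 7, "medium": 9}  # the cook's

def translate(label, source, target):
    """Convert a label through the shared reference (pepper count)."""
    peppers = source[label]  # steps 1+2: label -> standardized value
    # step 3: pick the target label whose standardized value is closest
    return min(target, key=lambda k: abs(target[k] - peppers))

print(translate("medium", profile_you, profile_cook))  # -> extra mild
```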

    COLOR PROFILES

    A device’s color response is characterized similarly to how the personalized spiciness table was created in the above example. Various numbers are sent to the device, and its output is measured in each instance:

    Input Number (Green) | Output Color (Device 1) | Output Color (Device 2)
    200                  | [swatch]                | [swatch]
    150                  | [swatch]                | [swatch]
    100                  | [swatch]                | [swatch]
    50                   | [swatch]                | [swatch]

    (The original table shows a measured color swatch for each device at each input number.)

    Real-world color profiles include all three colors, more values, and are usually more sophisticated than the above table — but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.
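    A minimal sketch of such a characterization, assuming a single green channel and invented sample values (real ICC profiles are far more sophisticated):

```python
# Measured samples: input number -> measured output luminance (0..1).
# The values are invented for illustration.
measured = {0: 0.0, 50: 0.02, 100: 0.10, 150: 0.35, 200: 0.70, 255: 1.0}

def device_output(value, samples):
    """Estimate the device's output by interpolating between measurements."""
    pts = sorted(samples)
    for lo, hi in zip(pts, pts[1:]):
        if lo <= value <= hi:
            t = (value - lo) / (hi - lo)
            return samples[lo] + t * (samples[hi] - samples[lo])
    raise ValueError("value outside measured range")

print(round(device_output(125, measured), 3))  # midway between 0.10 and 0.35
```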

    COLOR MANAGEMENT OVERVIEW

    Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:

    [Diagram: a characterized input device (additive RGB colors, with an RGB color profile / color space) connects through a CMM translation to the standardized Profile Connection Space, which connects through a second CMM translation to a characterized output device (subtractive CMYK colors, with a CMYK color profile / color space).]
    1. Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
    2. Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
    3. Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).

    The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.

    Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.

    Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’Éclairage (CIE) and used by the ICC.

    Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.

    Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Contrary to previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and would mean that accurate reproduction is impossible. In such cases the CMM therefore just has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.
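    As a rough illustration of a gamut mismatch, the crudest possible gamut mapping simply clips each channel into the destination device’s range. Real CMMs apply smarter rendering intents (perceptual, relative colorimetric, and so on); the 0.85 limit below is invented:

```python
def clip_to_gamut(rgb, lo=0.0, hi=0.85):
    """Pretend the printer tops out at 0.85 of the display's intensity,
    and clip each channel into that range (the simplest 'best approximation')."""
    return tuple(min(max(c, lo), hi) for c in rgb)

# The intense green and blue exceed the printer's gamut and get clipped.
print(clip_to_gamut((0.2, 0.95, 1.0)))  # -> (0.2, 0.85, 0.85)
```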

    UNDERSTANDING GAMMA CORRECTION

    Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.

    WHY GAMMA IS USEFUL

    1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).

    [Chart: linear vs. nonlinear gamma, cameras vs. human eyes. A reference tone is shown alongside the tone perceived as 50% as bright by our eyes and the tone detected as 50% as bright by the camera.]

    Refer to the tutorial on the Photoshop curves tool if you’re having trouble interpreting the graph.
    Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
    Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
    For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.

    Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.

    But how does all of this relate to gamma? In this case, gamma is what translates between our eye’s light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.

    Technical Note: Gamma is defined by Vout = Vin^gamma, where Vout is the output luminance value and Vin is the input/actual luminance value. This formula causes the blue line above to curve. When gamma < 1, the line arches upward, whereas the opposite occurs with gamma > 1.
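    The power law in the note above can be sketched directly (a toy helper, not a library function):

```python
def apply_gamma(v_in, gamma):
    """Apply the power law Vout = Vin ** gamma to a normalized (0..1) value."""
    return v_in ** gamma

mid = 0.5
print(round(apply_gamma(mid, 1 / 2.2), 3))  # encoding: 0.5 -> ~0.73 (arches up)
print(round(apply_gamma(mid, 2.2), 3))      # display:  0.5 -> ~0.22 (bends down)
```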

    2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):

    [Gradient comparison. Original: a smooth 8-bit gradient (256 levels). Below it, the same gradient encoded using only 32 levels (5 bits), once with linear encoding and once with gamma encoding.]

    Note: The gamma encoded gradient above is shown using a standard encoding gamma of 1/2.2.
    See the tutorial on bit depth for a background on the relationship between levels and bits.

    Notice how the linear encoding uses insufficient levels to describe the dark tones, even though this leads to an excess of levels to describe the bright tones. On the other hand, the gamma encoded gradient distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color, and histograms are all based on natural, perceptually uniform tones.

    However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8× as many levels (11 bits) would have been required to avoid image posterization.
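    The level-allocation argument can be sketched by quantizing a dark-to-light ramp to 32 levels and counting how many distinct levels land in the darkest quarter of the tonal range under each encoding (the sample count below is arbitrary):

```python
def distinct_dark_levels(gamma, levels=32, samples=1024):
    """Count distinct quantized levels used for scene luminances <= 0.25."""
    dark = set()
    for i in range(samples):
        v = i / (samples - 1)                   # linear scene luminance, 0..1
        q = round((v ** gamma) * (levels - 1))  # encode, then quantize
        if v <= 0.25:
            dark.add(q)
    return len(dark)

print(distinct_dark_levels(1.0))      # linear encoding: few shadow levels
print(distinct_dark_levels(1 / 2.2))  # gamma encoding: many more shadow levels
```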

    GAMMA WORKFLOW: ENCODING & CORRECTION

    Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

    [Diagram: a RAW camera image is saved as a JPEG file (1. image file gamma); the JPEG is viewed on a computer monitor (2. display gamma); the net effect is 3. system gamma. Image file gamma + display gamma = system gamma.]

    1. Depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2).
    2. Depicts a display gamma equal to the standard of 2.2

    1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.

    2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.

    3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.
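    Because applying two gamma curves in sequence multiplies the exponents, the file gamma and display gamma combine into the system gamma. A minimal sketch:

```python
# (v ** a) ** b == v ** (a * b), so the exponents multiply.
file_gamma, display_gamma = 1 / 2.2, 2.2
system_gamma = file_gamma * display_gamma
print(round(system_gamma, 6))  # 1.0 -> a straight line: output matches scene

v = 0.3                               # some scene luminance
encoded   = v ** file_gamma           # what gets stored in the file
displayed = encoded ** display_gamma  # what the monitor emits
assert abs(displayed - v) < 1e-9      # faithful reproduction
```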

    IMAGE FILE GAMMA

    The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those using sRGB and Adobe RGB 1998 color), but the big exception is with RAW files, which use a linear gamma. However, RAW image viewers typically show these presuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:

    [Images: a linear RAW image (image gamma = 1.0) next to a gamma encoded sRGB image (image gamma = 1/2.2).]

    If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.

    Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.

    DISPLAY GAMMA

    This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values. Older Macintosh computers used a display gamma of 1.8, which made non-Mac images appear brighter relative to a typical PC, but this is no longer the case.

    Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file, changing the display gamma will therefore have the following overall impact on an image:

    [Charts and portrait images for display gammas of 1.0, 1.8, 2.2, and 4.0, each showing the resulting overall gamma curve.]

    Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
    Recall from before that the image file gamma plus the display gamma equals the overall system gamma. Also note how higher gamma values cause the red curve to bend downward.

    If you’re having trouble following the above charts, don’t despair! It’s a good idea to first have an understanding of how tonal curves impact image brightness and contrast. Otherwise you can just look at the portrait images for a qualitative understanding.

    How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma is uncorrected by the display gamma, resulting in an overall system gamma that curves upward. In the second picture, the display gamma doesn’t fully correct for the image file gamma, resulting in an overall system gamma that still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma exactly corrects the image gamma, resulting in an overall linear system gamma. Finally, in the fourth picture the display gamma over-compensates for the image gamma, resulting in an overall system gamma that curves downward (thereby darkening the image).

    The overall display gamma is actually comprised of (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.

    [Photos: a CRT monitor and an LCD (flat panel) monitor.]

    CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

    LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.

    Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).
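    The cancellation arithmetic in the note above can be sketched as:

```python
# A "gamma correction of 1.5" often means the applied exponent is 1/1.5,
# because that is what cancels an encoding gamma of 1.5.
encoding_gamma = 1.5
applied_gamma = 1 / encoding_gamma

v = 0.42                                      # some encoded value
roundtrip = (v ** encoding_gamma) ** applied_gamma
assert abs(roundtrip - v) < 1e-9              # the two gammas cancel

print(round(encoding_gamma * applied_gamma, 6))  # net gamma: 1.0
```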

    OTHER NOTES & FURTHER READING

    Other important points and clarifications are listed below.

    • Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding also actually increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
    • Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
    • Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
    • Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where Vout = Vin^gamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follows a standard gamma power law, but the overall gamma is approximated as 2.2.
    • Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.
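    For reference, the sRGB transfer function mentioned in the “Applicability” bullet can be sketched as follows (the constants come from the sRGB specification):

```python
def srgb_encode(v):
    """Encode linear luminance (0..1) with the sRGB transfer function:
    linear below a small threshold, then a 1/2.4 power segment."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

print(round(srgb_encode(0.002), 5))  # linear region: 12.92 * v
print(round(srgb_encode(0.5), 4))    # power region: ~0.735
```

    Note that although the power segment uses an exponent of 1/2.4, the curve as a whole is approximated by an overall gamma of 1/2.2.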

    For more on this topic, also visit the following tutorials:

    !!

    Color can only exist when three components are present: a viewer, an object, and light. Although pure white light is perceived as colorless, it actually contains all colors in the visible spectrum. When white light hits an object, the object selectively blocks some colors and reflects others; only the reflected colors contribute to the viewer’s perception of color.

    [Illustration: a prism splitting white light into the visible spectrum.]
    Human Vision

    The human eye senses this spectrum using a combination of rod and cone cells. Rod cells are better for low-light vision, but they can only sense the intensity of light, whereas cone cells can also discern color, though they function best in bright light.

    Three types of cone cells exist in your eye, with each being more sensitive to either short (S), medium (M), or long (L) wavelength light. The set of signals possible at all three cone cells describes the range of colors we can see with our eyes. The diagram below illustrates the relative sensitivity of each type of cell for the entire visible spectrum. These curves are often also referred to as the “tristimulus functions.”

    [Chart: relative sensitivity of the S, M, and L cone cells across the visible spectrum, with an alternate luminosity view.]



    Raw data courtesy of the Colour and Vision Research Laboratories (CVRL), UCL.

    Note how each type of cell does not sense just one color, but instead has varying degrees of sensitivity across a broad range of wavelengths. The luminosity view of the same data shows which colors contribute the most towards our perception of brightness. Also note how human color perception is most sensitive to light in the yellow-green region of the spectrum; this is utilized by the Bayer array in modern digital cameras.

    ADDITIVE & SUBTRACTIVE COLOR MIXING

    Virtually all our visible colors can be produced by utilizing some combination of the three primary colors, either by additive or subtractive processes. Additive processes create color by adding light to a dark background, whereas subtractive processes use pigments or dyes to selectively block white light. A proper understanding of each of these processes creates the basis for understanding color reproduction.

    [Diagrams: the additive primary colors and the subtractive primary colors.]

    The colors in the three outer circles are termed primary colors, and are different in each of the above diagrams. Devices which use these primary colors can produce the maximum range of color. Monitors release light to produce additive colors, whereas printers use pigments or dyes to absorb light and create subtractive colors. This is why nearly all monitors use a combination of red, green and blue (RGB) pixels, whereas most color printers use at least cyan, magenta and yellow (CMY) inks. Many printers also include black ink in addition to cyan, magenta and yellow (CMYK) because CMY alone cannot produce deep enough shadows.

    Additive Color Mixing (RGB Color)  | Subtractive Color Mixing (CMYK Color)
    Red + Green = Yellow               | Cyan + Magenta = Blue
    Green + Blue = Cyan                | Magenta + Yellow = Red
    Blue + Red = Magenta               | Yellow + Cyan = Green
    Red + Green + Blue = White         | Cyan + Magenta + Yellow = Black

    Subtractive processes are more susceptible to changes in ambient light, because this light is what becomes selectively blocked to produce all their colors. This is why printed color processes require a specific type of ambient lighting in order to accurately depict colors.
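    The mixing tables above can be sketched numerically, treating each color as an RGB triple with channels in 0..1 (a toy model that ignores real ink behavior):

```python
def add_mix(*lights):
    """Additive mix: light accumulates per channel, capped at full intensity."""
    return tuple(min(1, sum(ch)) for ch in zip(*lights))

RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert add_mix(RED, GREEN) == (1, 1, 0)        # yellow
assert add_mix(RED, GREEN, BLUE) == (1, 1, 1)  # white

def cmy_from_rgb(rgb):
    """Subtractive complement: each ink removes one additive primary,
    so C = 1 - R, M = 1 - G, Y = 1 - B."""
    return tuple(1 - ch for ch in rgb)

print(cmy_from_rgb((1, 1, 0)))  # yellow: no cyan, no magenta, full yellow ink
```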

    COLOR PROPERTIES: HUE & SATURATION

    Color has two unique components that set it apart from achromatic light: hue and saturation. Visually describing a color based on each of these terms can be highly subjective; however, each can be more objectively illustrated by inspecting the light’s color spectrum.

    Naturally occurring colors are not just light at one wavelength, but actually contain a whole range of wavelengths. A color’s “hue” describes which wavelength appears to be most dominant. The object whose spectrum is shown below would likely be perceived as bluish, even though it contains wavelengths throughout the spectrum.

    [Chart: the object’s color spectrum across the visible range, with its hue region marked.]

    Although this spectrum’s maximum happens to occur in the same region as the object’s hue, it is not a requirement. If this object instead had separate and pronounced peaks in just the red and green regions, then its hue would instead be yellow (see the additive color mixing table).

    A color’s saturation is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths and appear much more pronounced than a similar, but less saturated color. The following example illustrates the spectrum for both a highly saturated and less saturated shade of blue.

    [Charts: spectral curves for a low saturation and a high saturation shade of blue.]

    —————————————————-
    To be reviewed and updated together with work on Book Design 1.

     Sources

    Cambridge in Colour: Colour Management and Printing series

    Underlying concepts and principles: Human Perception; Bit DepthBasics of digital cameras: pixels

    Color Management from camera to display Part 1: Concept and Overview; Part 2: Color Spaces; Part 3: Color Space Conversion; Understanding Gamma Correction

    Bit Depth

    Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue –  often referred to as a “color channel”. Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits,” which are used to specify each color channel (bpc) or per pixel (bpp). Images with higher bit depths can encode more shades or colors – or intensity of values -since there are more combinations of 0’s and 1’s available.

    Most color images from digital cameras have 8-bits per channel and so they can use a total of eight 0’s and 1’s. This allows for 28 or 256 different combinations—translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 28*3 or 16,777,216 different colors, or “true color.” This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2X if X refers to the bits per pixel and 23X if X refers to the bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.

    Bits Per Pixel Number of Colors Available Common Name(s)
    1 2 Monochrome
    2 4 CGA
    4 16 EGA
    8 256 VGA
    16 65536 XGA, High Color
    24 16777216 SVGA, True Color
    32 16777216 + Transparency
    48 281 Trillion
    USEFUL TIPS
    • The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see “Posterization Tutorial“).
    • Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
    • The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8-bits and 16-bits per channel, respectively.

    BASICS OF DIGITAL CAMERA PIXELS

    The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

     OVERVIEW OF COLOR MANAGEMENT

    “Color management” is a process where the color characteristics for every device in the imaging chain is known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.

    In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:

    digital imaging chain

    Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.

    THE NEED FOR PROFILES & REFERENCE COLORS

    Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.

    Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:

    calibration table

    To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.

    As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.

    COLOR PROFILES

    A device’s color response is characterized in much the same way as the personalized spiciness table was created in the example above. Various numbers are sent to the device, and its output is measured in each instance:

    characterization table: the same input numbers for the green channel (200, 150, 100, 50) are sent to two devices, and the measured output color of Device 1 and Device 2 is recorded for each

    Real-world color profiles cover all three color channels, include many more values, and are usually more sophisticated than the above table, but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.
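
    To make the idea concrete, here is a toy sketch (hypothetical measured values, nothing like a real ICC profile) of how a characterization table lets a value be translated from one device to another via a shared reference scale:

```python
# Hypothetical characterization tables: device input number -> measured
# output luminance (0-1). Real profiles cover all channels, use far more
# points, and store much richer data.
device1 = {0: 0.0, 128: 0.22, 255: 1.0}
device2 = {0: 0.0, 128: 0.50, 255: 1.0}

def inverse(table, target):
    """Find the input number that produces the target output,
    by linear interpolation between measured points."""
    pts = sorted(table.items())
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 <= target <= y1:
            return x0 + (target - y0) / (y1 - y0) * (x1 - x0)
    raise ValueError("target outside measured range")

# Device 1 outputs a luminance of 0.22 when sent the number 128.
# What number must device 2 receive to reproduce that luminance?
needed = inverse(device2, 0.22)
print(round(needed))  # a much smaller number than 128
```

    This mirrors the spiciness example: “128” means different things to each device, so the translation goes through the measured output (the standardized reference) rather than the raw number.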

    COLOR MANAGEMENT OVERVIEW

    Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:

    color management diagram: a characterized input device (a display using additive RGB colors, with an RGB color profile / color space) is translated by the CMM into the standardized Profile Connection Space, then translated by the CMM again to a characterized output device (a printer using subtractive CMYK colors, with a CMYK color profile / color space)
    1. Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
    2. Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
    3. Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).

    The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.

    Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.

    Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’éclairage (CIE) and used by the ICC.

    Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.

    Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Contrary to previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and would mean that accurate reproduction is impossible. In such cases the CMM therefore just has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.

    UNDERSTANDING GAMMA CORRECTION

    Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.

    WHY GAMMA IS USEFUL

    1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).

    linear vs nonlinear gamma (cameras vs human eyes): starting from a reference tone, the shade our eyes perceive as 50% as bright is far darker than the shade the camera detects as 50% as bright

    Refer to the tutorial on the Photoshop Curves tool if you’re having trouble interpreting the graph.
    Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
    Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
    For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.

    Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.

    But how does all of this relate to gamma? In this case, gamma is what translates between our eyes’ light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.

    Technical Note: Gamma is defined by Vout = Vin^gamma, where Vout is the output luminance value and Vin is the input/actual luminance value. This formula causes the blue line above to curve. When gamma < 1, the line arches upward, whereas the opposite occurs with gamma > 1.
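
    The power law in the technical note can be sketched in a few lines of Python (values normalized to the 0–1 range):

```python
# Vout = Vin ** gamma. Encoding with gamma = 1/2.2 lifts mid-tones;
# applying gamma = 2.2 afterwards cancels the encoding exactly.
def apply_gamma(v, gamma):
    return v ** gamma

mid = 0.5
encoded = apply_gamma(mid, 1 / 2.2)  # gamma < 1: curve arches upward
decoded = apply_gamma(encoded, 2.2)  # gamma > 1: curve arches downward
print(round(encoded, 2))  # 0.73: mid-grey is stored as a much higher value
print(round(decoded, 2))  # 0.5: encoding then decoding returns the original
```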

    2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):

    gradient comparison: the original smooth 8-bit gradient (256 levels) is re-encoded using only 32 levels (5 bits), once with linear encoding and once with gamma encoding

    Note: The gamma encoded gradient above is shown using a standard value of 1/2.2.
    See the tutorial on bit depth for a background on the relationship between levels and bits.

    Notice how the linear encoding uses insufficient levels to describe the dark tones, even while devoting an excess of levels to the bright tones. The gamma encoded gradient, by contrast, distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color and histograms are all based on natural, perceptually uniform tones.

    However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8X as many levels (11 bits) would’ve been required to avoid image posterization.
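
    The effect is easy to demonstrate numerically. A small sketch (my own illustration, using the 32-level example from above) counts how many quantized levels fall in the darkest quarter of the tonal range under each scheme:

```python
# Quantize tones in [0, 1] to 32 levels (5 bits), either directly
# (linear) or after gamma encoding with 1/2.2, and count the distinct
# levels used for the dark tones (0 to 0.25).
LEVELS = 32
GAMMA = 1 / 2.2

def quantize(v):
    return round(v * (LEVELS - 1))

dark_tones = [v / 1000 for v in range(251)]  # 0.000 ... 0.250
linear_levels = {quantize(v) for v in dark_tones}
gamma_levels = {quantize(v ** GAMMA) for v in dark_tones}

print(len(linear_levels))  # only a handful of levels for the dark tones
print(len(gamma_levels))   # roughly twice as many for the same tones
```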

    GAMMA WORKFLOW: ENCODING & CORRECTION

    Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

    gamma workflow diagram: a RAW camera image is saved as a JPEG file (1. image file gamma), the JPEG is viewed on a computer monitor (2. display gamma), and the net effect of the two is 3. the system gamma

    1. Depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2).
    2. Depicts a display gamma equal to the standard of 2.2.

    1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.

    2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.

    3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.
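
    Because gammas are exponents, the file gamma and display gamma combine by multiplication. A quick sketch of the workflow’s arithmetic:

```python
# Standard workflow: encode with 1/2.2, then display with 2.2.
FILE_GAMMA = 1 / 2.2
DISPLAY_GAMMA = 2.2

# The exponents multiply, so the net (system) gamma is 1.0.
system_gamma = FILE_GAMMA * DISPLAY_GAMMA
print(round(system_gamma, 3))  # 1.0: a straight line overall

# A mid-grey scene luminance therefore passes through unchanged.
scene = 0.18
displayed = (scene ** FILE_GAMMA) ** DISPLAY_GAMMA
print(round(displayed, 2))  # 0.18: output matches the original scene
```

    A system gamma slightly above 1.0, as mentioned above, would correspond to a display gamma a little higher than 2.2.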

    IMAGE FILE GAMMA

    The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those in the sRGB and Adobe RGB 1998 color spaces), but the big exception is RAW files, which use a linear gamma. However, RAW image viewers typically display these assuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:

    Linear RAW image (image gamma = 1.0) vs. gamma encoded sRGB image (image gamma = 1/2.2)

    If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.

    Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.

    DISPLAY GAMMA

    This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values. Older Macintosh computers used a display gamma of 1.8, which made non-Mac images appear brighter relative to a typical PC, but this is no longer the case.

    Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file, changing the display gamma will therefore have the following overall impact on an image:

    gamma curves charts and example portraits for display gammas of 1.0, 1.8, 2.2 and 4.0

    Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
    Recall from before that the image file gamma combined with the display gamma yields the overall system gamma (the gamma exponents multiply together). Also note how higher gamma values cause the red curve to bend downward.

    If you’re having trouble following the above charts, don’t despair! It’s a good idea to first have an understanding of how tonal curves impact image brightness and contrast. Otherwise you can just look at the portrait images for a qualitative understanding.

    How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma is uncorrected by the display gamma, resulting in an overall system gamma that curves upward. In the second picture, the display gamma doesn’t fully correct for the image file gamma, resulting in an overall system gamma that still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma exactly corrects the image gamma, resulting in an overall linear system gamma. Finally, in the fourth picture the display gamma over-compensates for the image gamma, resulting in an overall system gamma that curves downward (thereby darkening the image).

    The overall display gamma is actually comprised of (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.

    CRT Monitor LCD Monitor
    CRT Monitors LCD (Flat Panel) Monitors

    CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

    LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.

    Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).

    OTHER NOTES & FURTHER READING

    Other important points and clarifications are listed below.

    • Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding also actually increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
    • Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
    • Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
    • Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where Vout = Vin^gamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follows a standard gamma power law, but the overall gamma is approximated as 2.2.
    • Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.

    For more on this topic, also visit the following tutorials:

     

  • Documentary Integrity and Truth

    Being there

    How will you operate as a photographer?

    Will you ask permission or will you be a fly on the wall,
    a ghost who never affects the image? This is a major question relevant to your production ethics.
    If you tell people what you’re doing, then they’ll react differently to you; they may be guarded or
    wary of how you’ll portray them.

    Photographers who lived with the communities they were photographing:

    • Chris Killip with the sea coal gatherers in the North East
      of England
    • Martin Parr with the people of Hebden Bridge in Yorkshire.
    • Bruce Davidson has done a similar thing in New York.

    This is very different to the approach of Garry Winogrand or Cartier-Bresson, neither of whom interfered or announced their presence. Winogrand operated like a ghost but got quite close to the action to produce his wide-angle views, whereas Cartier-Bresson
    remained peripheral, on the edge, trying not to be important to the subject.

    Truth
    This is a big one for the social documentary practitioner. The image has to have integrity: it has to be honest and factual in order to validate it for the viewer as an accurate portrayal. This is arguably where the boundary lies between social documentary and photojournalism, where the image has more of an editorial purpose. In photojournalism the choice of photographer and the style of image-making will always have to suit the editorial nature of the publication, and this is perhaps a bridge too far for the documentary photographer. Most documentary photographers would have no objection to any magazine or media outlet publishing their documentary images, provided that they were published as the photographer intended the viewer to see them: not cropped or enhanced.

    Kendall Walton ‘Transparent Pictures’

    ‘Ambiguities and discontinuities’  Berger & Mohr 1995 p91.

    Bazin

    ‘For the first time, between the originating object and its reproduction there intervenes only the instrumentality of a non-living agent. For the first time an image of the world is formed automatically, without the creative intervention of man… in spite of any objections our critical spirit may offer, we are forced to accept as real the existence of the object reproduced, actually, re-presented…’

    The Ontology of the Photographic Image 1945

    Sekula

    If we accept the fundamental premise that information is the outcome of a culturally determined relationship, then we can no longer ascribe an intrinsic or universal meaning to the photographic image.

    On the Invention of Photographic Meaning 1997 p454

  • Reality and hyperreality

    This is a practice-based course so we won’t be going into detail about the nature of truth,
    hermeneutics, reality and hyperreality. What follows is a very brief summary. However you may
    want to research some of these areas for yourself. You could start by looking at some of the titles
    in the reading list at the end of this course guide.
    We all feel that we’re aware of reality, but what is ‘real’? If you live in a desert region, perhaps
    access to a tap with fresh clean water within five minutes’ walk is a fantasy. This is not the world
    of the suburb where everyone has several taps of their own. Therefore one person’s reality and
    normality is not that of another. The freedom to travel is a reality for most of us, but not in
    all countries. As a tourist it’s possible to travel from region to region in Cuba but for a Cuban
    national it isn’t – unless you’ve got the correct paperwork. If we see a travel programme on
    TV we see a reality for the tourist, for the paying visitor, a valued source of national income.
    There may be many issues that are a reality of life in the countries we visit that we’re not aware
    of and wouldn’t like if we were. This is not to say that there is a judgment to be made by us.
    As a tourist with a completely different set of values that underpin your reality, how can you
    make such a judgment? This is where the photographer living within a society must start to
    consider the nature of the imagery that is produced. How will it portray the subject – in reality
    or hyperreality?
    Philosophers and critics talk about hyperreality where the human mind can’t distinguish reality
    from a simulation of reality. Hyperreality is what our consciousness defines as ‘real’ in a situation
    where media shape or filter an original event or experience. In other words, it’s ‘reality by proxy’.
    As photographers we need to understand where we are with the images we produce in terms
    of their reality. Is the image reality, a representation of reality or a simulation of reality? Jean
    Baudrillard (1929–2007) was a French philosopher who addressed hyperreality and who talked
    about the nature of reality in terms of simulacra and simulation.
    The link below will take you to an interview that discusses simulacra and simulation. It talks
    about the destabilisation of the media and our ability to identify what’s real and what’s not real.
    www.youtube.com/watch?v=80osUvkFIzI
    In a nutshell, simulation is the process whereby representations of things come to replace
    the things being represented and indeed become more important than the real thing. At the
    extreme, you end up with a simulacrum which has no relation to reality. So an image may:
    • truly reflect reality
    • mask and pervert reality
    • mask the absence of reality
    • bear no relation to any reality – it is its own simulacrum.
    We need to be careful to avoid simulation lest we engage with hyperreality as a reality without
    recognising its values.
    Jorge Luis Borges (1899–1986) in his work On Exactitude in Science (1946) describes
    hyperreality as “a condition in which ‘reality’ has been replaced by simulacra.” Borges felt that
    language had nothing to do with reality. Reality is a combination of perceptions, emotions,
    facts, feeling, whereas language is a series of structured rules that we need to obey to help
    others perceive our reality. The same is true of visual language.
    Baudrillard argues that today we only experience prepared realities – edited war footage,
    meaningless acts of terrorism, the Jerry Springer Show:
    “The very definition of the real has become: that of which it is possible to give
    an equivalent reproduction… The real is not only what can be reproduced,
    but that which is always already reproduced: that is the hyperreal… which
    is entirely in simulation. Illusion is no longer possible, because the real is no
    longer possible.”
    Baudrillard argues that we must attain an understanding of our state of perception and the
    message content that is communicated in terms of the images we construct. Even if we see it as
    a straight record of what we saw on the day, can this be a reality and, if so, for whom?
    Documentary photographers are entering a new age with a new set of criteria. The issue of
    technical quality is not relevant. Images from mobile phones that capture the ‘moment’ will
    be printed by newspapers if the image tells the story. Where then is the need for a bag full of
    cameras and kit? In a new age of photography the documentarist will need to engage with the
    issue of hyperreality by establishing credibility, motivation and integrity. The reputation or name
    is then the status giver, the endorser of reality and truth to the images produced and offered to
    the media and the public.

  • Landscape and the City

    !!To be developed with documentary

    Since the very beginning of photography, the city has provided opportunities for the photographer, both as a landscape in its own right and as a source of other subject matter.

    Detachment

    Daguerre’s ‘View of the Boulevard du Temple’: the first photograph to show a person, rendered only because the man remained relatively still while having his shoes shined.

    Talbot’s views of Paris.

    “The images of Paris remain passive and mute, and establish not so much the tourist eye-view, hungry for sights to record, as one that was looking for things to record… his London images, for example Nelson’s Column (1843), keep the city at a distance and follow the eye in its way within the urban world.”
    (Clarke, 1997, p.77)

    Eugene Atget

    Social documentary

    John Thomson Street Life in London

    Jacob Riis How the other half lives.

    Brassai

    Cities within cities

    A recurring line of investigation is that of the city, not just as one complete interconnecting unit, but as layers of different cities within cities. Sometimes these elements are briefly exposed to one another, but often they are designed to restrain their inhabitants from uncomfortable contact with each other, e.g. the film In Time.

    Paul Seawright.    Invisible Cities.

     

    1.9: Visual research and analysis – social contrasts

  • Territorial Photography

    Read Snyder’s essay ‘Territorial Photography’ from W.J.T. Mitchell (ed.) (2002) Landscape and Power. Summarise Snyder’s key points.

    Snyder’s argument:

    In the 1830s and 1840s photography was dominated by wealthy upper class photographers trained within a fine art tradition. Their work was personal and intended for a small audience as there were no means for mass production. It aimed to express the photographer’s feelings towards the landscape depicted.

    But from the mid to late 1850s a growing middle class clientele created a large and definable market for landscape photographs. This coincided with/was a motor for development of mass production techniques. As the costs of photography reduced, the photographers themselves increasingly came from the middle classes. Prints were increasingly sold by print houses near to sites of tourist interest, and other commercial outlets.

    This led to a change in the understanding and techniques of photography itself. Middle class clients were interested in technological progress and wanted photographs that looked machine made, with a high degree of detail. There was a tendency towards glossy sepia. Photographs came to be seen as disinterested reports.

    Issue: ‘how to make a picture that was resolutely photographic yet, at the same time, beautiful or stunning… but that nevertheless could be convincingly experienced as aconventional, a product of scientific laws and photographic craft.’

    Carleton Watkins      Google Images

    ‘entrepreneur whose job was to record pre-existing scenes in a thoroughly disinterested manner’

    Watkins seeks to harmonise landscape with industrial progress – grand, sublime and quiet – a West American Eden. He produced large 20″ x 24″ negatives of very detailed views of Yosemite Park, the Pacific Coast and sparsely settled areas of Utah and Nevada. The images of Yosemite emphasise accessibility and grandeur. Those of the Pacific coast depict it as unspoiled and unspoilable. There is no questioning of whose land it was, or what happened to the earlier Native American settlers on the land.

    William Henry Jackson  Google images

    Also picturesque-sublime. His images reconfirm beliefs about the American landscape as a ‘scene of potential habitations, acculturation and exploitation’.

    Timothy O’Sullivan                               Google images

    O’Sullivan was not subject to commercial pressures. His pictures were antipicturesque, showing the West as ‘a boundless place of isolation, of contrasts of blinding light and deep, impenetrable shadows’ – a ‘bleak, inhospitable land, god-forsaken, anaesthetising landscape’. Despite their detail, his images are not primarily scientific and ‘objective’, but often carefully constructed, e.g. recording his own footprints and carefully selecting particular elements of a view. Figures are present but have no clear role. His work was rejected in its time until rediscovered by Ansel Adams in 1939.

    Next, find and evaluate two photographs by any of the photographers Snyder mentions, but not specific examples that he addresses in the essay. Your evaluation (up to 250 words for each) should reflect some of the points that Snyder makes, as well as any other references. (Both the images below are under Creative Commons license)

    Watkins Evaluation of Cathedral Rocks

    This image makes the rock fill the frame, placing it on the centre line and looking upwards. The light is soft, fading into a gentle, mysterious mistiness at the summit. Shadow lines are also soft, rather like an ancient bone, which adds the mystery of weathered age. The trees are quite large and draw the viewer’s gaze upwards, as if inviting a climb. The curve of the trees around the bottom of the image suggests the possibility of a hidden way up behind the mountain.

    O’Sullivan  South Side of Inscription Rock

    Timothy O’Sullivan, South Side of Inscription Rock, New Mexico, 1873

    This image of a similar subject is altogether more stark and forbidding. The rock is placed off-centre, showing the flat landscape beyond. The light is harsh with defined shadows to emphasise the sharp razor-like lines in the rock. It is absolutely clear there is no way up. The sky and sepia colour are burning hot – like a scene on the moon. The shrubs and small trees in the foreground are dwarfed – but note the semi-cropped larger tree on the left.

  • Semiotics Signifier Signified

    (to be looked at again when I study Barthes in my Illustration course)

    TASK

    Find an advert from a magazine, newspaper or the internet, which has some clearly identifiable signs. Using the example above to help you, list the signs.

    What are the signifiers? What is signified? Read:

Roland Barthes’ essay ‘Rhetoric of the Image’ (1977)

    to help clarify your understanding of these principles. You might find Barthes hard going at first, but please persevere. The way in which meaning is constructed in an image is directly relevant to photographic practice.

For an example of the dissection of an advert, see:

    Marlboro Men

Motoring advertising provides some of the most spectacular imagery: vehicles are often anthropomorphised – heroic against the elements, nimble on ice, or intelligently negotiating streets and saving the family.

There have also been some superb parodies of ad campaigns (including Marlboro), as well as ingenious campaigns for not-for-profit organisations.

  • Taking Portraits

    If you have access to the relevant equipment, imagine that you have been asked by a client to take a fairly formal portrait photograph – for example a graduation portrait. (Commercial photographers take hundreds of these in a day at graduation ceremonies.)

    The main point of this exercise is to get to grips with studio lighting so experiment with your lighting effects and make notes in
    your learning log or blog.

    The next two projects return to a historical theme but also give you the opportunity to explore different styles of portrait photography for yourself.

    Keep an eye on the composition too, though. Remember some ‘rules’ for general portraiture:
    • When you’re composing an image, generally keep the eyes in the top third of the image unless you specifically require a different effect.
    • Don’t be afraid to come in close or go out to include background.
    • Remember the rule of thirds and frame the subject to be the point of attention.
    (Look on the internet to remind yourself about the rule of thirds and its application to photographic composition if you need to. This is a rule taken straight from classical art.)

    If you haven’t got the necessary equipment to attempt the practical exercise, contact a local photographic studio and ask if you can spend some time there watching how they work. If you explain that you’re a student on a degree level photography course they may be only too happy to show you what they know – and you’ll get some inside information on the merits and demerits of various types of equipment. This would be a valuable experience even if you do have your own studio lighting.

  • Journeys

    ‘Going North’

    ‘Going North’ montage

For this assignment I wanted to explore a journey that was likely to be ‘unpicturesque’ – one characteristic of many journeys on busy roads through rather boring countryside. I travel a lot for my work – currently in Africa – and so have photographed many ‘journeys’, which I am planning to revisit in a review of ideas about ‘safari’ for Assignment 4. But for this assignment, based on the discussions in the course materials and my work on ‘Christmas 2014’, I wanted to focus closer to home and try something a bit different. I have become interested in some of the ideas from ‘New Topographics’ – Lewis Baltz‘s aim to look for things that were most unremarkable, presenting them in as unremarkable a way as possible to ‘appear objective’ and not show a point of view, and also to show how we use and construct the landscape to make our busy lives more convenient. I am also interested in the different effects of different ways of making the image: should they be sharp and studied with a political message, as in Nadav Kander‘s work on China and Paul Graham‘s Great North Road; deliberately blurred and out of focus to convey a subjective mood, as in Robert Frank’s The Americans and Chris Coekin‘s work; or even more uncontrolled, as in mobile phone images?

I started by experimenting a bit with my mobile phone on train journeys, building on some earlier images of train journeys in London (see Docklands Journey). I used my iPhone to take images of the train journey home from London to Cambridge as we went past open fields, suburban allotments and warehouses, and included some images of the passengers (see London to Cambridge). On this iPhone it is not possible to control shutter speed with the normal camera app, so the focus is on the actual image. These images gave me very much a feeling of movement and of going through the suburban landscape. I also like some of the reflections in the windows. But this is an aesthetic I need to think about a lot more: what actually creates an effect or mood, what is just snapshot, and what exactly am I trying to convey – the passage of time, the isolation of commuters, the ordinariness of countryside, specific landmarks of human intervention, or maybe something new and less clichéd?

I also experimented with walking – the idea of a disturbing inward journey. On a walk along the River Holme I photographed light, human-made objects and litter along the way (see HolmeWalk images). I was interested in how some of these things became quite disturbing – footprints in mud, hanging ropes like dead birds, electricity boxes like nestboxes, plastic tubes like underground snakes. I photographed in colour and then converted to high-contrast black and white in Lightroom, but without any further black and white manipulation, because I wanted the images to retain an element of accident and not be too contrived. I am planning to use these images in my Book Design course to illustrate the poem ‘Jabberwocky’ as a scary fable. This type of approach is also something I want to explore further – inspired by some of the images of Japanese photographers like Daido Moriyama and Hiroshi Sugimoto.

For the actual assignment on this course, I chose a two-and-a-half-hour journey up the A1 from Cambridge to Barnsley to take my assessment materials to OCA. I had experimented with shutter speed and lenses on my various journeys in Africa. In order to maintain the more ‘objective’ and random element, and also not to spend all my time fiddling with camera settings – freeing me to focus on the image itself – I decided to take two series, each using a different lens and format but keeping the same approach within each series, and then decide which series was most interesting and select the images:

1) Going North – my 28mm wide-angle fixed lens, in landscape format, to give me the widest choice of composition, including some dramatic distortion; shutter priority at a fast shutter speed of 1/1600.

    2) Going South – my 100mm telephoto in portrait format to give a much flatter image and using a slower shutter speed.

    I looked at the route on Google Street View in advance. But as the road is extremely long and the interest in my images would be from traffic and events along the route rather than ‘views’ I did not pursue this area of exploration far. I used a standard map during the journey itself.

    I was interested in using photography as a way of exploring and discovering the road, rather than shooting to a predetermined formula (following photographers like Kander and Baltz). I took over 1000 images going up and 500 coming back because it was very difficult to predict any ‘decisive moment’ so I shot images at frequent intervals whenever I saw anything potentially interesting and/or characteristic of a particular stretch of road. I found the wide angle fast shutter speed images much more interesting – partly but not only because of the interestingly dreadful heavy rain for most of the journey and the feeling of potential risk that this added. Though it was difficult to actually shoot near collisions without provoking them! The dramatic gloomy sky was also a constant until very nearly at the end of the journey, framing things like water towers and pylons. The images themselves are mostly very sharp and some are quite like those of Paul Graham, rather than Coekin.

The big challenge was then what to make of all the images, and of the ways in which the photographic process had made me constantly aware of new things. I started by thinking of selecting 12 images as a montage for 12 stages of the journey (see the different pages of square thumbnails in Going North on the Zemniimages website), but I was not sure if that would be cheating. If I was to select 12 images I still needed to think about exactly what I was trying to say: should I choose deadpan images that showed the sameness of much of the countryside? Or the dramatic cloud breaks? Or the awful traffic?

In the end, as I reread the material for this part of the course, I decided to experiment with typological ideas. I had started to notice the many different signs – particularly industrial estates trying to sound rural, like ‘Honeypot Lane’, ‘historic market towns’ and the variety of traffic signs. I could have taken this typological approach from the start, but then the images could have become too studied and I would have lost the feeling of journey as exploration. Looking through what I found to be some of the most interesting images, I noted that they were often based on colour. So I thought of doing a post-selection of red vehicles – an after-the-event ‘I-Spy red cars’ game of the kind used to keep children happy on a boring journey.

I think the images are best displayed as a montage of square images so that they can be seen all at once. For a slideshow I would have selected rather different images that told a clearer narrative or anti-narrative. I could have cropped and processed the images into a more considered montage with aesthetic dynamic diagonals and abstract colour patterns, but I think that would have negated the rather random nature of the images and my ambivalent feelings about the journey. On the one hand it was pretty long, tiring and, at many stretches, boring; the greys and plastics of much of the ‘architecture’, and the feeling of so many people busily going somewhere but nowhere special, were a bit depressing for my picturesque sensibilities. At the same time, I found that the photographic process added a frisson of interest – the chase and the spotting of new things, more than just I-Spy. And basically that is just how life is much of the time: random, fractured, disordered and much the same. If this series manages to convey that, rather than the somewhat more romantic movement of some of my earlier train journey images, then it has achieved much of its purpose.

    The Brief

Produce a series of approximately 12 photographs that are made on, or explore the idea of, a journey. The journey that you document may be as long or as short as you like. You may choose to re-examine a familiar route, such as a commute to work or another routine activity, or it may be a journey into unfamiliar territory. You may travel by any means available.
    Introduce your work with a supporting text (around 500 words) that:
    • Describes how you interpreted this brief.
    • Describes how your work relates to aspects of photography and visual culture addressed in Part Two.
    • Evaluates the strengths and weaknesses of your work, describing what you would have done differently or how you might develop this work further.
    • Identifies what technical choices you made to help communicate your ideas, and also references relevant artists and photographers who have influenced the creative direction of your project.
    • Explains your reasons for selecting particular views, and arriving at certain visual outcomes.

    Reflection
    Just a reminder to look at the assessment criteria again. Think about how well you have done against the criteria and make notes in your learning log.

    Link to preliminary ideas about your critical review (Assignment Four)

    Link to ‘Transitions’ task (Assignment Six).

     

  • Street Photography Methods

    Shooting from the hip

    Take some time out to develop the technique of shooting very quickly. You’ll probably produce
    some very blurred and even disastrous images, but fortunately mistakes aren’t as expensive
    in the digital age as they were when Winogrand was working the New York streets.
    Produce a set of eight images that demonstrate the life and vibrancy of city living. If you
    don’t live anywhere near a city, choose a spot or a day when there’s a lot going on – the
    busier the better. If you need to, re-read the safety advice in the Introduction to this course
    guide.
    Analyse and reflect on your final images in your learning log or blog:
    • What makes the successful images work well?
    • What difficulties did you experience?
    • How do you feel about this type of work? Is it honest? Are your images a truthful
    representation or did you edit the truth in some way, consciously or sub-consciously?

    Outdoor Portraits

    On a bright day photograph a person in the sunshine. Do this outdoors in three stages.

    1. First, photograph them with your back to the sun. Write down the exposures and look at the issues involved in getting the person’s expression and pose right.

    2. Next, photograph the person with their back to the sun. Write down the exposures. If you can, look at the highlights and the shadow readings given from the spot metering. If you can’t set spot metering then you can still get some indication of the difference in
    brightness. Look to see if there’s burn-out – over-exposure of the hair or black shadow areas without any light at all. The contrast ratio will be very high – what do you think it is?

    3. Take the photograph again but this time use a reflector. Write down the reading that you get from the face and the contrast level.
A reflector can be aluminium foil over a cardboard box. It can be a bed sheet, or it can be a fancy professional item; it can be big (2m²) or small (30cm diameter), square or round – it makes no difference. What does make a difference is the quality of the light coming off it.
To prove this, repeat the third shot with silver (kitchen foil on a board) and then with white, and note the difference. Put your reflective material on a stand or get someone to hold it for you. In practice, of course, you can use reflectors that already exist – a white wall, for example. Keep your eyes open for possible surfaces.

    • Analyse the differences between the images you’ve made in relation to the exposure you’ve used. Write up your findings with images in your learning log or blog.
    • How much difference has the reflector made?
    • Is there evidence of hard or soft light in this exercise?
    Hollywood-style

    What you do for this exercise will depend on whether you have access to a full set of studio lights and a studio to use them in. If you haven’t, don’t worry – go straight to Part B.

    Part A: If you have lights
    You’re going to create your own version of Hurrell’s Bogart image. Choose someone who might fit the bill, borrow some props (or look in a charity shop), then take a good look at the Humphrey Bogart image and the lighting diagram again.
Set up the lighting as shown in the diagram. You need a good distance to the background, perhaps about 3m. It should be black – and it is only black when it meters 5 stops below the face reading for the exposure, not before; at 4 stops it’s dark grey, at 2 stops it’s light grey.
    Remember you need plenty of space to avoid light bouncing about where it shouldn’t be.
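As an aside on the metering logic: each stop halves the light, so a background metering 5 stops below the face reflects only 1/2⁵ (about 3%) of the light, which is why it renders as black. A quick sketch to check the arithmetic (the helper function is my own, not from the course notes):

```python
# Each photographic stop halves the light: a surface metering
# N stops below the subject reflects 1 / 2**N of the subject's light.
def relative_brightness(stops_below):
    """Fraction of the face-reading light coming off the background (illustrative helper)."""
    return 1 / 2 ** stops_below

for stops, label in [(2, "light grey"), (4, "dark grey"), (5, "black")]:
    print(f"{stops} stops below the face: {relative_brightness(stops):.4f} -> {label}")
# 5 stops below is 1/32, i.e. roughly 3% of the light -> reads as black
```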

The light at the back is quite high, coming down on the sitter and washing over the shoulders. Note the shadow on the front side of the hat and the highlight along its top edge, and also the light falling over the shoulder in front. All this tells you that the light is quite high, probably 1m above the sitter. The fill dish indicated is a beauty dish with a diffuser, used straight on here as a fill light.
Look at the shadow on the collar – that’s what you’re looking to reproduce. The black board indicated is there to stop light going directly into the camera lens, as this would cause flare and degrade the image. The light at front left has a grille on it to stop the light straying around; it puts more light on the front rear shoulder and the left side of the hat in the image.
You may need to place a little white card at the opening to bounce the light down a little.
The light should be above head height. You’ll need a small table lamp with a soft light as the fill light and, if possible, a small off-camera flash gun to provide the left-hand light. This puts more light on the front rear shoulder and the coat area.

For your next project you’ll move forward 20 or 30 years to look at the photographic portrait of the 60s, exemplified by the work of David Bailey.

    Part B: Without lights
    There are plenty of Hollywood images to choose from. Pick one that you enjoy and can re-create using window light and a small amount of bounced flash. You’ll need a suitable model, preferably someone who is prepared to enter into the spirit of the task!
The first thing to do is analyse the lighting. Where is it coming from? Look for the deepest shadows to find the main light. Then make your own lighting diagram and produce a Hollywood-style set of portrait images.
    Produce three images in black and white – 10×12” or A4 at 320 dpi.
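As a practical aside, the pixel dimensions implied by ‘10×12” or A4 at 320 dpi’ are easy to work out. This small sketch does the arithmetic (the helper names are my own, purely illustrative):

```python
# Pixel dimensions required for a print of a given physical size at a given dpi.
MM_PER_INCH = 25.4

def pixels_needed(width_in, height_in, dpi=320):
    """Round physical size in inches x dpi to whole pixels (illustrative helper)."""
    return round(width_in * dpi), round(height_in * dpi)

print(pixels_needed(10, 12))                                # 10x12" print: (3200, 3840)
print(pixels_needed(210 / MM_PER_INCH, 297 / MM_PER_INCH))  # A4 (210x297mm): (2646, 3742)
```

So a full A4 print at 320 dpi needs an image of roughly 2646 × 3742 pixels, comfortably within the range of most modern digital cameras.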

  • What is a Photographer?

Marius de Zayas (1880–1961) was closely allied to the 291 gallery. His essays ‘Photography’ and ‘Photography and Artistic Photography’ were first published in Camera Work no. 41 (1913).

    de Zayas makes a dichotomous distinction between:

the ‘artist photographer’, who tries to represent something from within themselves as a ‘systematic and personal representation’ and then applies this to the study of external form – an example being Steichen.

    Steichen images from Google

    ‘photographers’ who try to represent external reality based on ‘free and external’ investigation and research, bringing these different objective elements together into one communicated image – an example being Stieglitz.

    Stieglitz images from Google

I see this distinction as continuing to be valid, but more as a continuum than a dichotomy. As a researcher I am very well aware that practically all research (even scientific research, but particularly social research) is inevitably informed by subjective perspectives on which questions are important to ask, how to ask them and how to analyse the responses. In addition there are considerable individual as well as cultural variations even in, for example, the perception of colour and shape. On the other hand, pure abstraction and subjectivity are also nearly impossible, as our thought processes are dependent on, and in many ways determined by, external experience.

The extremes of the continuum have arguably moved further apart: digital imaging has significantly broadened the possibilities for artist photographers, while technological advances have made it possible to reproduce a greater range of tones and colours and to calibrate these objectively (e.g. through the use of histograms) against what the photographer concludes is ‘external reality’.

    A further element that does not come into de Zayas’ framework is the response of the viewer and the degree to which anticipated responses of different audiences affect both the investigation of external reality and ways of communicating subjectivity. Digital media offer interesting new possibilities for photographer/viewer interactivity.

    ————————————–

    key quotes

    ‘Photography is not Art, but photographs can be made to be Art…

…The difference between Photography and Artistic-Photography is that, in the former, man [sic!!!!] tries to get at that objectivity of Form which generates the different conceptions that man has of Form, while the second uses the objectivity of Form to express a preconceived idea in order to convey an emotion. The first is the fixing of an actual state of Form, the other is the representation of the objectivity of Form, subordinated to a system of representation. The first is a process of indagation, the second a means of expression. In the first, man tries to represent something that is outside of himself; in the second he tries to represent something that is in himself. The first is a free and impersonal research, the second is a systematic and personal representation.
    The artist photographer uses nature to express his individuality, the photographer puts himself in front of nature, and without preconceptions, with the free mind of an investigator, with the method of an experimentalist, tries to get out of her a true state of conditions…

    Up to the present, the highest point of these two sides of Photography has been reached by Steichen as an artist and by Stieglitz as an experimentalist.
    The work of Steichen brought to its highest expression the aim of the realistic painting of Form. In his photographs he has succeeded in expressing the perfect fusion of the subject and the object. He has carried to its highest point the expression of a system of representation: the realistic one.
    Stieglitz has begun with the elimination of the subject in represented Form to search for the pure expression of the object. He is trying to do synthetically, with the means of a mechanical process, what some of the most advanced artists of the modern movement are trying to do analytically with the means of Art.

???? I am not sure I understand this.

    It would be difficult to say which of these two sides of Photography is the more important. For one is the means by which man fuses his idea with the natural expression of Form, while the other is the means by which man tries to bring the natural expression of Form to the cognition of the mind.