Author: lindamayoux

  • ‘Late’ Photography

    TASK

    1. Read David Campany’s essay ‘Safety in Numbness’ (see ‘Online learning materials and student-led research’ at the start of this course guide). Summarise the key points of the essay and note down your own observations on the points he raises.

    ‘There is a sense in which the late photograph, in all its silence, can easily flatter the ideological paralysis of those who gaze at it without the social or political will to make sense of its circumstance… If the banal matter-of-factness of the late photograph can fill us with a sense of the sublime, it is imperative that we think through why this might be. There is a fine line between the banal and the sublime, and it is a political line.’ (Campany, p.192)

    Campany reaches this conclusion first through a discussion of Meyerowitz’s Aftermath photographs and the BBC documentary Reflections on Ground Zero.

    [wpdevart_youtube]A8hN-aNWWBE[/wpdevart_youtube]

    He questions the trend towards ‘photographing the aftermath of events – traces, fragments, empty buildings, empty streets, damage to the body and damage to the world’ and the way it has become prevalent in photojournalism as a response to the overwhelming use of video as the record of unfolding events, and to the fact that photographers are now rarely allowed access to conflict sites – unlike, for example, in Vietnam. One could also add, since Campany’s 2003 article, the ubiquitous use of mobile phone video by people involved in events, who can now upload footage almost instantaneously as a more ‘democratic’ and immediate (if often shakily filmed) perspective and record on what is happening.

    He argues that it is the stillness of late photography that gives it its power – more memorable than events on the move. While its privileged status may be imagined to stem from a natural capacity to condense and simplify things, the effects of the still image derive much more from its capacity to remain open. It is that openness that can be paralysing: ‘In its apparent finitude and muteness it can leave us in permanent limbo, suspending even the need for analysis and bolstering a kind of liberal melancholy that shuns political explanation.’

    2. Look at some of Meyerowitz’s images available online from Aftermath: World Trade Centre Archive (2006). Consider how these images differ from your own memories of the news footage and other images of the time. Write a short response to the work (around 300 words), noting what value you feel this ‘late’ approach has.

    Meyerowitz’s images were taken as the officially sanctioned record of the impact of the attack, partly as a memorial for the relatives but also as a historical record. Their monumental ‘sublime beauty’ in the colours and the large cinematic format is apocalyptic – resonant of the paintings of artists like John Martin and Turner. Like much other ‘late photography’ they contain few people. Those that are there are dwarfed by the enormity of the buildings, the machinery and the hellish volcanic fires. Meyerowitz claims that he did not make the images: ‘I was told how to photograph it by the thing itself’. As Campany points out, that means he is not questioning his own background and assumptions, which inevitably underlie his photographic skills and practice.

    His approach is very different from that of another on-site photographer – the policeman John Botte. Botte’s images include more people, and are a more participatory social documentary of the clear-up activities.

    John Botte’s photos of Ground Zero

    Unlike Meyerowitz, he did not have official permission: because he had done a lot of photography as part of his police work, he had been asked by his boss to photograph the clear-up, and this later led to various legal complications. His health was seriously damaged by the work at the site, and he did not profit from the photos he took – the proceeds were given to charity.

    [wpdevart_youtube]vp5Zi16IRDg[/wpdevart_youtube]

    I agree with Campany that the monumental aesthetic beauty in Meyerowitz’s images seems to anaesthetise and paralyse any political questioning of why the event occurred, and of whether and why that particular type of historical record was needed. I was in India conducting an NGO workshop at the time of the attacks and first saw the news with colleagues. On the one hand, they had all been through much more serious natural disasters – the Gujarat earthquake and periodically severe monsoon floods – in which many thousands more people had been killed, both by the disaster itself and then by the lack of emergency aid in the follow-up, much of it unreported in the Western press. On the other hand, in the light of the ongoing war in Afghanistan, there was also high anti-American feeling. It was only when I got back two weeks later that I saw the images at home.

    I think the apocalyptic nature seems almost to glorify the unintended martyrdom of the victims – matched by praise of the way the survivors lay down their mourning images and then get up and move on. The images raise no questions about events in the countries from which the al-Qaeda perpetrators came, including but not only American actions, or about how the conflict can really be resolved. To me they look very much like the images I saw on TV in Sudan some years later during the US ‘shock and awe’ opening of the Iraq War – but very different from the interviews about its impact with people on the Al Jazeera Arabic channel, that time in video footage, some of it live.

    This lack of questioning is not, however, inherent in ‘late photography’, but lies in the selection of the effects one photographs and their contextualisation in other images or documents that might portray a multiplicity of complex perspectives – even where a clear message is unlikely to be appropriate or effective.

     
    ————————————-

    !!To be updated from Landscape Photography

    In his 2003 essay, David Campany comments that:

    “One might easily surmise that photography has of late inherited a major role as undertaker, summariser or accountant. It turns up late, wanders through the places where things have happened totting up the effects of the world’s activity.” (‘Safety in Numbness: Some remarks on the problem of “Late Photography”’, in Campany (ed.), 2007)
    This ‘aftermath’ approach dates back to the war photographers of the Crimean War (1853–56) and the American Civil War, because of the technological limitations of the time. With large plate cameras and slow emulsions it was not possible to photograph actual combat, so their images focused instead on portraits of soldiers, camp scenes and the aftermath of battles and skirmishes. These photographs could not yet be reproduced en masse in the illustrated press, but some were used as the basis for woodcut engravings in publications such as The Illustrated London News and Harper’s Weekly.

    Although technology today makes it possible – though still difficult – to capture the heat of war and atrocities, this is not necessarily the most effective way of portraying the horrors of violence.
    Examples of photographers using the ‘late’ approach in contemporary landscape include:

    • Joel Meyerowitz’s Aftermath images of Ground Zero in New York
    • Richard Misrach’s images of the American desert, showing the aftermath of human activity in a beautified, distilled large format
    • Sophie Ristelhueber’s aerial images of the Afghan conflict, showing the scars left on the landscape
    • Paul Seawright’s Hidden – cold, ‘objective’ images of battle sites and minefields in Afghanistan
    • Willie Doherty’s very evocative images of the detritus left behind by conflicts during the Troubles and in the present day

    Other photographers have focused on the precursors – the tension in anticipation of violence: “not the ‘theatre of war’ but its rehearsal studio” (Campany, 2008, p.46).

    • An-My Lê’s (to do) series 29 Palms (2004) documents US marine training manoeuvres at a range used to prepare soldiers ahead of deployment in Afghanistan and Iraq.
    • Adam Broomberg and Oliver Chanarin’s Chicago (2005) (to do) examines an Israeli military training ground
    • Paul Shambroom’s project Security (2003−07) studied the simulated training sites that are used by the US emergency services and Department of Homeland Security, nicknamed ‘Disaster City’ and ‘Terror Town’.
    • Sarah Pickering in the UK has photographed training grounds for the fire and police services. Her images contain no people and aim to look like a film set ready for the action.

    See Post on Landscape Photography blog: 3.3: ‘Late Photography’

  • Colour Photography: Styles and Creativity

    Reality/surreality/hyperreality. Mechanical vs art. Looking back from digital colour and its high levels of control: capturing images is now so easy, and the possibilities for control at the shooting and processing stages so broad, that the aesthetics and meaning are often lost.

    Early Colour BBC 1974 overview of early colour photographers and techniques: tinting, gum bichromate, oil process, 3-colour process and autochrome.
    George Eastman Museum 2014. Pigment processes: carbon prints and gum bichromate prints were developed in the 1850s and offer superior permanence and control of the appearance of the final print and are still used today.
    George Eastman Museum 2014. History of development of colour processes from tinting to chromogenic film processes of 1970s.

    Early photography: pictorialism to modernism

    Early colour photography processes produce a feeling of nostalgia for a bygone, leisurely time. As in monochrome photography, this ‘elite impressionist aesthetic’ can be enhanced through, for example, the use of chiaroscuro and light, smearing vaseline on the lens, or adding brushstrokes or scratches to the film during development. In colour photography particularly, the aesthetic is also partly a result of inherent technical limitations of early equipment and processes:

    • Lens aberrations and distortions in perspective
    • Chemicals were unstable, inconsistent and less sensitive, leading to colour shifts, grain, and limited tonality and dynamic range, and requiring long exposure times – and hence shallow depth of field and blurring. Did the effect of long exposures, while the model tried to stay still, give the selective movement blur and the reflective feel?
    • Fragile plates and scratches that add to the feeling of human frailty and inevitable passage of time.
    • Edges of the plates? burning and fade?

    Processes like hand-colouring and tinting, coupled with the blurriness of the original black-and-white image, give a de-saturated, dreamy look. The leisurely feel is enhanced by the very long exposures needed to produce the multiple plates in different colours that are then combined: photographing any action was not possible, and the shallow depth of field left much of the image dreamily blurred. Grain, scratches and other imperfections are further exaggerated by the fragility of the glass plates and the nature of the pigments and chemicals used.

    Colour photography techniques
    • hand colouring of black and white prints
    • monochrome tinting through use of dyes and pigments at the development stage: cyanotypes, carbon prints and gum bichromate prints. They use pigments and bichromated colloids (viscous substances like gelatin or albumen made light-sensitive by adding a bichromate) that harden when exposed to light and become insoluble in water. The resulting prints are characterized by broad tones and soft detail, sometimes resembling paintings or drawings.
    • oil process
    • 3-colour process
    • Autochrome 1907-1935: 3 colour process using potato starch. Soft focus, pointillist grain. Slow process if you want to keep exposures under control.
    Colour photographs from 1907: Autochrome and Pictorialism. Ted Forbes 2015, as part of his YouTube Art of Photography series. Discusses the autochrome process in the context of other early processes, debates on colour photography as art, and how we interpret early colour photographs from our current digital perspective. Based on the book Impressionist Photography.
    2018 John Thornton and Don Camera: Is Pictorialism Dead? Looks at the artistic inspiration.
    Debbi Richard 2009 Two short clips from a PBS documentary titled: “American Photography: A Century of Images.” Paul Strand’s straight photography started to re-establish the primacy of black and white as ‘serious’ photography with an emphasis on minimum artifice and attention to tonal abstraction and shapes.
    Alfred Stieglitz
    Alfred Stieglitz overview of his monochrome work and life, showing his pictorialist art style.
    Heinrich Kuhn pictorialism
    Overview of Kuhn’s life and work. Ted Forbes 2014, as part of the YouTube The Art of Photography series. Interesting discussion of early colour techniques in the context of camera clubs and their debates about colour photography. Detailed discussion of the technical challenges of lenses and unstable chemicals, and how Kuhn addressed these through scientific experiment and composition to make very evocative images.
    Based on book Heinrich Kuhn: The Perfect Photograph
    Edward Steichen
    Overview of Steichen’s colour and black-and-white work, including early landscapes. Ted Forbes 2011, as part of the YouTube The Art of Photography series. Based on the book Steichen’s Legacy. Use of moody low-key landscapes. In figure studies he takes out facial information to create intensity, drama and mystery, and uses abstraction with harsh lighting to produce patterns. Reduction of the image to just the information needed. Humour in shadows.
    Heinrich Kuhn autochrome technique
    Neue Galerie New York 2012. Gives a very detailed overview of the autochrome process: the priority of lighting and backlighting to give luminosity, coupled with the fragility of the plates. He experimented with colour patches, aiming to apply colour like a painter.
    Kuhn, Stieglitz and Steichen
    Neue Galerie New York 2012. Dr Monika Faber discusses exhibition and book: “Heinrich Kuehn and His American Circle: Alfred Stieglitz and Edward Steichen”. Shows more of his tinted photographs and landscape.
    Paul Strand modernism

    The Art of Photography 2014 modernist photography using the power of the image to create social awareness. Book: Paul Strand: Sixty Years of Photographs (Aperture) http://www.amazon.com/gp/product/0912…

    Colour film photography: 1970s to contemporary

    Overview focusing on the era from the 1970s onwards, by Ted Forbes 2013 as part of his YouTube Art of Photography series.
    Discusses use of autochrome process in travel photographs by National Geographic.
    William Eggleston in the 1970s established colour photography as respectable fine art.
    Saul Leiter’s work, rediscovered in the 1990s, uses abstraction and a faded quality.
    Fernando Schiana – not high contrast.
    Ori Gersht uses splashes of colour against desaturated backgrounds.
    Dan Winters – contemporary muted portraits.
    Colours are still not accurate and white balance is off, but that gives a retro-nostalgic feel; colour is used as part of the composition at the time of shooting.
    William Eggleston
    Saul Leiter
    Joel Meyerowitz

    see also: Stephen Shore

    https://illustration.zemniimages.info/inspiration-stephen-shore
    Luigi Ghirri

    Martin Parr

    https://illustration.zemniimages.info/inspiration-martin-parr

    Digital Styles

    Lomography

    Lomography is a genre of photography involving taking spontaneous photographs with minimal attention to technical details. Lomographic images often exploit unpredictable, non-standard optical traits of cheap toy cameras (such as light leaks and irregular lens alignment) and non-standard film processing techniques for aesthetic effect.

    Lomography is named after the Soviet-era 35 mm LOMO LC-A Compact Automat camera, produced by the state-run optics manufacturer Leningradskoye Optiko-Mekhanicheskoye Obyedinenie (LOMO) PLC of Saint Petersburg. The camera was loosely based upon the Cosina CX-1 and introduced in the early 1980s. In 1992 the Lomographic Society International was founded as an art movement by a group of Viennese students interested in the LC-A camera, who put on exhibitions of photos. The art movement then developed into the Austrian company Lomographische AG, a commercial enterprise which claimed “Lomography” as a commercial trademark.

    See their website: https://www.lomography.com

    But lomography is now a genericized trademark referring to the general style that can be produced with any cheap plastic toy camera using film. Similar-looking techniques can be achieved with digital photography. Many camera phone photo editor apps include a “lomo” filter. It is also possible to achieve the effect on any digital photograph through processing in software like Adobe Photoshop, Lightroom or Analog FX Pro. The lomography trend peaked in 2011.
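A rough sketch of one ingredient of the look – the heavy corner vignette of a toy-camera lens – in plain Python. The quadratic falloff and the default strength are illustrative choices of mine, not any particular app’s actual “lomo” filter:

```python
import math

def lomo_vignette(pixels, strength=0.6):
    """Darken a grayscale image (a list of rows of 0-255 values) towards its
    corners, mimicking the light falloff of a cheap plastic lens."""
    h, w = len(pixels), len(pixels[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_d = math.hypot(cy, cx)            # distance from centre to a corner
    out = []
    for y, row in enumerate(pixels):
        new_row = []
        for x, v in enumerate(row):
            d = math.hypot(y - cy, x - cx) / max_d   # 0 at centre, 1 at a corner
            new_row.append(round(v * (1 - strength * d ** 2)))
        out.append(new_row)
    return out
```

Run on a flat mid-grey frame, the centre pixel keeps its value while the corners fall off smoothly; larger `strength` values push the effect towards the classic tunnel-like lomo frame.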

    Because of its ease of use, lomography has also been used in participatory photographic activism, for example by children in the slums of Nairobi.

    Grunge effects

    Starts with a lot of work in Lightroom before adding blur and other effects in Photoshop.
    Again produces different line and filter overlays.
    Uses blur, high-pass and HDR filter effects on a very diffuse original image.
    Creates a very impactful black-and-white version to use instead of a high-pass filter overlay, and produces multiple versions.

    Cinematic effects

    Excellent overview. Introduces different cinema looks. Covers curve adjustment layers on channels, the pros and cons of LUTs, and how to use photo-toning gradient maps.
    Uses solid colour adjustment layers at different opacities for highlights, shadows and midtones using Blend If. Gives more control than opacity maps.
    Uses LUTs and Blend If.
    Uses moody vignettes.
  • Colour Management

     Sources

    Cambridge in Colour: Colour Management and Printing series

    Underlying concepts and principles: Human Perception; Bit Depth; Basics of Digital Cameras: Pixels

    Color Management from camera to display Part 1: Concept and Overview; Part 2: Color Spaces; Part 3: Color Space Conversion; Understanding Gamma Correction

    Bit Depth

    Every color pixel in a digital image is created through some combination of the three primary colors – red, green, and blue – each often referred to as a “color channel”. Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits,” used to specify each color channel (bpc) or each pixel (bpp). Images with higher bit depths can encode more shades or colors – more intensity values – since more combinations of 0’s and 1’s are available.

    Most color images from digital cameras have 8 bits per channel, so each channel can use a total of eight 0’s and 1’s. This allows for 2^8 or 256 different combinations – translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3) or 16,777,216 different colors, or “true color.” This is referred to as 24 bits per pixel, since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to bits per pixel, and 2^(3X) if X refers to bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.

    Bits Per Pixel   Number of Colors Available   Common Name(s)
    1                2                            Monochrome
    2                4                            CGA
    4                16                           EGA
    8                256                          VGA
    16               65,536                       XGA, High Color
    24               16,777,216                   SVGA, True Color
    32               16,777,216 + Transparency
    48               281 Trillion
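The powers-of-two arithmetic above is easy to check in a few lines of Python (a sketch; the function names are mine):

```python
def colors_from_bpp(bits_per_pixel):
    """Total distinct colors available at a given bits-per-pixel depth: 2^bpp."""
    return 2 ** bits_per_pixel

def colors_from_bpc(bits_per_channel, channels=3):
    """Total colors when each of `channels` channels gets its own bits: 2^(channels * bpc)."""
    return 2 ** (channels * bits_per_channel)

print(colors_from_bpc(8))    # 16777216 -> 8 bits per channel is "true color"
print(colors_from_bpp(24))   # 16777216 -> the same depth expressed per pixel
print(colors_from_bpp(48))   # 281474976710656 -> the "281 Trillion" row above
```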
    USEFUL TIPS
    • The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see “Posterization Tutorial“).
    • Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
    • The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8-bits and 16-bits per channel, respectively.

    BASICS OF DIGITAL CAMERA PIXELS

    The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

     

    OVERVIEW OF COLOR MANAGEMENT

    “Color management” is a process in which the color characteristics of every device in the imaging chain are known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.

    In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:

    [diagram: the digital imaging chain – camera → display → print]

    Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.

    THE NEED FOR PROFILES & REFERENCE COLORS

    Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.

    Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:

    [diagram: spiciness calibration table]

    To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.

    As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.

    COLOR PROFILES

    A device’s color response is characterized similarly to the way the personalized spiciness table was created in the above example. Various numbers are sent to the device, and its output is measured in each instance:

    [table: input numbers 200, 150, 100 and 50 in the green channel, each mapped to the measured output colour swatch for Device 1 and Device 2]
    Real-world color profiles include all three colors, more values, and are usually more sophisticated than the above table — but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.
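The characterize → standardize → translate chain can be mocked up with one-channel piecewise-linear profiles. Everything below – the profile values and helper names – is invented for illustration; real ICC profiles cover all three channels and are far more sophisticated:

```python
def interp(pairs, x):
    """Piecewise-linear interpolation through measured (input, output) points."""
    pts = sorted(pairs)
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def translate(value, src_profile, dst_profile):
    """Device 1 value -> standardized reference value -> Device 2 value."""
    reference = interp(src_profile, value)                  # characterize -> standardize
    inverse_dst = [(ref, dev) for dev, ref in dst_profile]  # read destination profile backwards
    return interp(inverse_dst, reference)                   # standardize -> device 2

# Hypothetical measured profiles: (device_value, standardized_reference_value)
display_profile = [(0, 0), (100, 40), (200, 90), (255, 100)]
printer_profile = [(0, 0), (128, 50), (255, 100)]
```

Here a green value of 100 on the display maps to reference level 40, which the printer only reaches at roughly device value 102 – the same number produces different colours on different devices, and the translation step is what keeps them looking alike.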

    COLOR MANAGEMENT OVERVIEW

    Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:

    [diagram: a characterized RGB input device (display), with its colour profile defining a colour space, is translated by the CMM into the standardized Profile Connection Space, then translated again by the CMM to a characterized CMYK output device (printer) via its own colour profile]
    1. Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
    2. Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
    3. Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).

    The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.

    Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.

    Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’Éclairage (CIE) and used by the ICC.

    Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.

    Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Contrary to previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and would mean that accurate reproduction is impossible. In such cases the CMM therefore just has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.
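A toy version of the fallback for a gamut mismatch is simple per-channel clamping (real CMMs offer smarter rendering intents, such as perceptual compression; the gamut numbers below are made up):

```python
def clip_to_gamut(color, gamut_max):
    """Clamp each channel of a 0-1 color to the output device's maximum
    reproducible value - a crude 'best approximation' for out-of-gamut colors."""
    return tuple(min(c, m) for c, m in zip(color, gamut_max))

printer_max = (0.85, 0.90, 0.88)                     # hypothetical per-channel limits
print(clip_to_gamut((1.0, 0.2, 0.3), printer_max))   # -> (0.85, 0.2, 0.3)
```

In-gamut colors pass through untouched; only the channels the printer cannot reproduce are pulled back to its limit.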

    UNDERSTANDING GAMMA CORRECTION

    Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.

    WHY GAMMA IS USEFUL

    1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).

    [graph: linear vs nonlinear gamma – cameras vs human eyes. The tone our eyes perceive as 50% as bright sits at a much lower luminance than the tone the camera detects as 50% as bright.]

    Refer to the tutorial on the photoshop curves tool if you’re having trouble interpreting the graph.
    Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
    Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
    For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.

    Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.

    But how does all of this relate to gamma? In this case, gamma is what translates between our eye’s light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.

    Technical Note: Gamma is defined by V_out = V_in^gamma, where V_out is the output luminance value and V_in is the input/actual luminance value. This formula causes the blue line above to curve. When gamma<1, the line arches upward, whereas the opposite occurs with gamma>1.
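The encode/decode round trip is a direct transcription of that formula (a sketch using the common 1/2.2 encoding value):

```python
def apply_gamma(v_in, gamma):
    """V_out = V_in ** gamma, with luminance normalized to the 0-1 range."""
    return v_in ** gamma

encoded = apply_gamma(0.5, 1 / 2.2)   # mid-grey is stored as a higher value (~0.73)
decoded = apply_gamma(encoded, 2.2)   # the display gamma undoes the encoding
assert abs(decoded - 0.5) < 1e-9      # net gamma of 1.0: the tone comes back intact
```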

    2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):

    [figure: a smooth 8-bit gradient (256 levels) re-encoded using only 32 levels (5 bits) – the linearly encoded version bands visibly in the dark tones, while the gamma encoded version spaces its levels perceptually evenly]

    Note: Above gamma encoded gradient shown using a standard value of 1/2.2
    See the tutorial on bit depth for a background on the relationship between levels and bits.

    Notice how the linear encoding uses insufficient levels to describe the dark tones – even though this leads to an excess of levels to describe the bright tones. On the other hand, the gamma encoded gradient distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color, and histograms are all based on natural, perceptually uniform tones.

    However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8X as many levels (11 bits) would’ve been required to avoid image posterization.
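That claim can be sanity-checked by counting how many distinct dark tones survive quantization to 32 levels, with and without gamma encoding. This is a toy sketch: the 1000-sample ramp and the “darkest 10%” cutoff are arbitrary illustrative choices:

```python
def quantize(v, levels):
    """Round a 0-1 value to the nearest of `levels` evenly spaced steps."""
    return round(v * (levels - 1)) / (levels - 1)

def distinct_dark_tones(levels, gamma=None, cutoff=0.1):
    """Count distinct quantized outputs among the darkest `cutoff` fraction of tones."""
    seen = set()
    for i in range(1000):
        v = i / 999 * cutoff                                  # fine ramp through the dark tones
        if gamma:
            q = quantize(v ** gamma, levels) ** (1 / gamma)   # encode, quantize, decode
        else:
            q = quantize(v, levels)
        seen.add(round(q, 6))
    return len(seen)

print(distinct_dark_tones(32))                # linear: only a handful of dark levels
print(distinct_dark_tones(32, gamma=1/2.2))   # gamma encoded: roughly three times as many
```

With 32 linear levels the darkest tenth of the range collapses onto just a few values, while gamma encoding spreads several times as many levels into the same region – exactly the banding difference described above.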

    GAMMA WORKFLOW: ENCODING & CORRECTION

    Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

    RAW camera image → saved as a JPEG file → viewed on a computer monitor → net effect:
    1. image file gamma + 2. display gamma = 3. system gamma

    1. Depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2).
    2. Depicts a display gamma equal to the standard of 2.2.

    1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.

    2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.

    3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.
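The three steps above can be sketched numerically (a minimal sketch, assuming the standard 1/2.2 encoding and 2.2 display gammas): the file gamma and the display gamma cancel, so the displayed luminance matches the scene luminance.

```python
# Sketch: file gamma and display gamma cancel to a system gamma of 1.0.

ENCODE_GAMMA = 1 / 2.2   # applied when the image file is saved
DISPLAY_GAMMA = 2.2      # applied by the monitor/video card

def saved_value(scene_luminance):
    """Value written to the file for a given linear scene luminance."""
    return scene_luminance ** ENCODE_GAMMA

def displayed_luminance(file_value):
    """Light emitted by the screen for a given file value."""
    return file_value ** DISPLAY_GAMMA

for scene in (0.05, 0.25, 0.5, 0.9):
    out = displayed_luminance(saved_value(scene))
    # system gamma = (1/2.2) * 2.2 = 1.0, so output equals input
    assert abs(out - scene) < 1e-9
```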

    IMAGE FILE GAMMA

    The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those using sRGB and Adobe RGB 1998 color), but the big exception is with RAW files, which use a linear gamma. However, RAW image viewers typically show these presuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:

    Linear RAW Image (image gamma = 1.0)
    Gamma Encoded Image (image gamma = 1/2.2)

    If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.

    Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.

    DISPLAY GAMMA

    This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values. Older Macintosh computers used a display gamma of 1.8, which made non-Mac images appear brighter relative to a typical PC, but this is no longer the case.

    Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file, changing the display gamma will therefore have the following overall impact on an image:

    Gamma curves charts and portrait renderings for display gammas of 1.0, 1.8, 2.2 and 4.0

    Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
    Recall from before that the image file gamma plus the display gamma equals the overall system gamma. Also note how higher display gamma values cause the red curve to bend downward.

    If you’re having trouble following the above charts, don’t despair! It’s a good idea to first have an understanding of how tonal curves impact image brightness and contrast. Otherwise you can just look at the portrait images for a qualitative understanding.

    How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma is uncorrected by the display gamma, resulting in an overall system gamma that curves upward. In the second picture, the display gamma doesn’t fully correct for the image file gamma, resulting in an overall system gamma that still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma exactly corrects the image gamma, resulting in an overall linear system gamma. Finally, in the fourth picture the display gamma over-compensates for the image gamma, resulting in an overall system gamma that curves downward (thereby darkening the image).
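The same interpretation can be checked with a quick calculation (a sketch, not from the tutorial): a midtone of 0.25 scene luminance is encoded with gamma 1/2.2, then shown under each of the four display gammas.

```python
# Sketch: how different display gammas render the same gamma-encoded midtone.

encoded = 0.25 ** (1 / 2.2)   # midtone as stored in the file

for display_gamma in (1.0, 1.8, 2.2, 4.0):
    shown = encoded ** display_gamma        # light the screen emits
    system_gamma = display_gamma / 2.2      # net exponent applied to the scene
    print(display_gamma, round(system_gamma, 2), round(shown, 3))
```

A display gamma below 2.2 leaves the midtone brighter than the scene, 2.2 restores it exactly, and 4.0 darkens it, matching the four portraits.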

    The overall display gamma is actually comprised of (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.

    CRT Monitor   LCD Monitor
    CRT Monitors LCD (Flat Panel) Monitors

    CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

    LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.

    Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).

    OTHER NOTES & FURTHER READING

    Other important points and clarifications are listed below.

    • Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding also actually increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
    • Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
    • Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
    • Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where Vout = Vin^gamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follows a standard gamma power law, but the overall gamma is approximated as 2.2.
    • Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.
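The Applicability point above can be made concrete with the standard sRGB encoding curve (constants from the sRGB specification): a linear toe near black, a 1/2.4 power segment elsewhere, and an overall behavior close to a plain 1/2.2 power law.

```python
# Sketch of the sRGB transfer curve: linear near black, power law elsewhere.

def srgb_encode(linear):
    """Convert a linear luminance value in [0, 1] to sRGB-encoded form."""
    if linear <= 0.0031308:
        return 12.92 * linear                     # linear segment near black
    return 1.055 * linear ** (1 / 2.4) - 0.055    # power segment

# The piecewise curve tracks a plain 1/2.2 power law quite closely:
for v in (0.05, 0.18, 0.5, 0.9):
    assert abs(srgb_encode(v) - v ** (1 / 2.2)) < 0.03
```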


    Color can only exist when three components are present: a viewer, an object, and light. Although pure white light is perceived as colorless, it actually contains all colors in the visible spectrum. When white light hits an object, it selectively blocks some colors and reflects others; only the reflected colors contribute to the viewer’s perception of color.

    Prism: White Light and the Visible Spectrum
    Human Vision

    The human eye senses this spectrum using a combination of rod and cone cells. Rod cells are better for low-light vision, but can only sense the intensity of light, whereas cone cells can also discern color, although they function best in bright light.

    Three types of cone cells exist in your eye, with each being more sensitive to either short (S), medium (M), or long (L) wavelength light. The set of signals possible at all three cone cells describes the range of colors we can see with our eyes. The diagram below illustrates the relative sensitivity of each type of cell for the entire visible spectrum. These curves are often also referred to as the “tristimulus functions.”


    Raw data courtesy of the Colour and Vision Research Laboratories (CVRL), UCL.

    Note how each type of cell does not just sense one color, but instead has varying degrees of sensitivity across a broad range of wavelengths. The luminosity curve shows which colors contribute the most towards our perception of brightness. Also note how human color perception is most sensitive to light in the yellow-green region of the spectrum; this is utilized by the Bayer array in modern digital cameras.

    ADDITIVE & SUBTRACTIVE COLOR MIXING

    Virtually all our visible colors can be produced by utilizing some combination of the three primary colors, either by additive or subtractive processes. Additive processes create color by adding light to a dark background, whereas subtractive processes use pigments or dyes to selectively block white light. A proper understanding of each of these processes creates the basis for understanding color reproduction.

    Additive Primary Colors
    Subtractive Primary Colors

    The colors in the three outer circles are termed primary colors, and are different in each of the above diagrams. Devices which use these primary colors can produce the maximum range of color. Monitors release light to produce additive colors, whereas printers use pigments or dyes to absorb light and create subtractive colors. This is why nearly all monitors use a combination of red, green and blue (RGB) pixels, whereas most color printers use at least cyan, magenta and yellow (CMY) inks. Many printers also include black ink in addition to cyan, magenta and yellow (CMYK) because CMY alone cannot produce deep enough shadows.

    Additive Color Mixing (RGB Color)
    Red + Green = Yellow
    Green + Blue = Cyan
    Blue + Red = Magenta
    Red + Green + Blue = White

    Subtractive Color Mixing (CMYK Color)
    Cyan + Magenta = Blue
    Magenta + Yellow = Red
    Yellow + Cyan = Green
    Cyan + Magenta + Yellow = Black

    Subtractive processes are more susceptible to changes in ambient light, because this light is what becomes selectively blocked to produce all their colors. This is why printed color processes require a specific type of ambient lighting in order to accurately depict colors.
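The two mixing tables can be sketched as simple channel arithmetic (a simplification for illustration: additive mixing as clipped sums, subtractive mixing as a per-channel minimum, since only light passed by both pigments survives):

```python
# Sketch: additive and subtractive color mixing as (R, G, B) arithmetic, 0-255.

def additive_mix(a, b):
    """Adding light: each channel is the sum, clipped to 255."""
    return tuple(min(x + y, 255) for x, y in zip(a, b))

def subtractive_mix(a, b):
    """Inks block light: only light that passes both pigments survives."""
    return tuple(min(x, y) for x, y in zip(a, b))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
CYAN, MAGENTA, YELLOW = (0, 255, 255), (255, 0, 255), (255, 255, 0)

assert additive_mix(RED, GREEN) == YELLOW
assert additive_mix(additive_mix(RED, GREEN), BLUE) == (255, 255, 255)  # white
assert subtractive_mix(CYAN, MAGENTA) == BLUE
assert subtractive_mix(subtractive_mix(CYAN, MAGENTA), YELLOW) == (0, 0, 0)  # black
```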

    COLOR PROPERTIES: HUE & SATURATION

    Color has two unique components that set it apart from achromatic light: hue and saturation. Visually describing a color based on each of these terms can be highly subjective; however, each can be more objectively illustrated by inspecting the light’s color spectrum.

    Naturally occurring colors are not just light at one wavelength, but actually contain a whole range of wavelengths. A color’s “hue” describes which wavelength appears to be most dominant. The object whose spectrum is shown below would likely be perceived as bluish, even though it contains wavelengths throughout the spectrum.

    Color Hue
    Visible Spectrum

    Although this spectrum’s maximum happens to occur in the same region as the object’s hue, it is not a requirement. If this object instead had separate and pronounced peaks in just the red and green regions, then its hue would instead be yellow (see the additive color mixing table).

    A color’s saturation is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths and appear much more pronounced than a similar, but less saturated color. The following example illustrates the spectrum for both a highly saturated and less saturated shade of blue.


    Spectral Curves for Low and High Saturation Color

    —————————————————-
    To be reviewed and updated together with work on Book Design 1.

     Sources

    Cambridge in Colour: Colour Management and Printing series

    Underlying concepts and principles: Human Perception; Bit Depth; Basics of digital cameras: pixels

    Color Management from camera to display Part 1: Concept and Overview; Part 2: Color Spaces; Part 3: Color Space Conversion; Understanding Gamma Correction

    Bit Depth

    Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue. Each primary color is often referred to as a “color channel”. Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits,” used to specify each color channel (bpc) or each pixel (bpp). Images with higher bit depths can encode more shades or intensity values, since there are more combinations of 0’s and 1’s available.

    Most color images from digital cameras have 8 bits per channel, and so each channel can use a total of eight 0’s and 1’s. This allows for 2^8 or 256 different combinations, translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3) or 16,777,216 different colors, or “true color.” This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel, and 2^(3X) if X refers to the bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.

    Bits Per Pixel | Number of Colors Available | Common Name(s)
    1 | 2 | Monochrome
    2 | 4 | CGA
    4 | 16 | EGA
    8 | 256 | VGA
    16 | 65,536 | XGA, High Color
    24 | 16,777,216 | SVGA, True Color
    32 | 16,777,216 + Transparency |
    48 | 281 Trillion |
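The arithmetic behind the table is a pair of one-line formulas, sketched below:

```python
# Sketch: counting available colors from bit depth, per channel vs. per pixel.

def colors_per_channel(bits_per_channel):
    """Distinct intensity values per color channel."""
    return 2 ** bits_per_channel

def total_colors(bits_per_channel, channels=3):
    """Distinct colors per pixel for the given channel count."""
    return colors_per_channel(bits_per_channel) ** channels

assert colors_per_channel(8) == 256
assert total_colors(8) == 16_777_216       # 24 bpp "true color"
assert total_colors(8) == 2 ** 24          # 2^X with X = bits per pixel
```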
    USEFUL TIPS
    • The human eye can only discern about 10 million different colors, so saving an image at more than 24 bpp is excessive if the only intended purpose is viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see “Posterization Tutorial“).
    • Color gradations in images with fewer than 8 bits per color channel can be clearly seen in the image histogram.
    • The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8 bits and 16 bits per channel, respectively.

    BASICS OF DIGITAL CAMERA PIXELS

    The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

     OVERVIEW OF COLOR MANAGEMENT

    “Color management” is a process where the color characteristics of every device in the imaging chain are known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.

    In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:

    digital imaging chain

    Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.

    THE NEED FOR PROFILES & REFERENCE COLORS

    Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.

    Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:

    calibration table

    To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.

    As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.

    COLOR PROFILES

    A device’s color response is characterized similarly to how the personalized spiciness table was created in the above example. Various numbers are sent to this device, and its output is measured in each instance:

    Input Number (Green) | Output Color (Device 1) | Output Color (Device 2)
    200 | [swatch] | [swatch]
    150 | [swatch] | [swatch]
    100 | [swatch] | [swatch]
    50 | [swatch] | [swatch]
    Real-world color profiles include all three colors, more values, and are usually more sophisticated than the above table — but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.
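A toy version of this idea can be sketched in code (the measurement numbers below are hypothetical, and real profiles interpolate in three dimensions rather than picking the nearest single-channel match): each profile maps input numbers to a measured reference intensity, and translation finds the destination input that best reproduces the source's output.

```python
# Sketch: two hypothetical single-channel "profiles" (input -> measured
# reference intensity) and a nearest-match translation between them.

PROFILE_1 = {50: 0.10, 100: 0.25, 150: 0.50, 200: 0.80}
PROFILE_2 = {50: 0.20, 100: 0.45, 150: 0.70, 200: 0.90}

def translate(value, src_profile, dst_profile):
    """Map a source device value to the closest-matching destination value."""
    target = src_profile[value]  # characterize: what the source actually shows
    # translate: pick the destination input whose measured output is closest
    return min(dst_profile, key=lambda k: abs(dst_profile[k] - target))

# Device 1's "150" is best reproduced by sending "100" to device 2,
# because both measure near the same reference intensity.
assert translate(150, PROFILE_1, PROFILE_2) == 100
```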

    COLOR MANAGEMENT OVERVIEW

    Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:

    Characterized Input Device (display, additive RGB colors) with its RGB Color Profile (color space)
    → CMM Translation →
    Standardized Profile Connection Space
    → CMM Translation →
    Characterized Output Device (printer, subtractive CMYK colors) with its CMYK Color Profile (color space)
    1. Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
    2. Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
    3. Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).

    The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.

    Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.

    Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’Éclairage (CIE) and used by the ICC.

    Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.

    Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Contrary to previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and would mean that accurate reproduction is impossible. In such cases the CMM therefore just has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.
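A crude sketch of one way a gamut mismatch can be handled (this illustrates simple clipping only; real CMMs offer several rendering intents, such as perceptual compression of the whole gamut):

```python
# Sketch: clipping an out-of-gamut value to the nearest reproducible one.

def clip_to_gamut(value, gamut_max):
    """Clip one channel/saturation value to the destination device's limit."""
    return min(value, gamut_max)

# A display can show a saturation of 1.0, but the printer only reaches 0.85.
assert clip_to_gamut(0.95, 0.85) == 0.85   # out of gamut: best approximation
assert clip_to_gamut(0.60, 0.85) == 0.60   # in gamut: reproduced unchanged
```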

    UNDERSTANDING GAMMA CORRECTION

    Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.

    WHY GAMMA IS USEFUL

    1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).

    linear vs nonlinear gamma - cameras vs human eyes
    Reference Tone
    Perceived as 50% as Bright
    by Our Eyes
    Detected as 50% as Bright
    by the Camera

    Refer to the tutorial on the photoshop curves tool if you’re having trouble interpreting the graph.
    Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
    Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
    For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.

    Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.

    But how does all of this relate to gamma? In this case, gamma is what translates between our eye’s light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.

    Technical Note: Gamma is defined by Vout = Vingamma , where Vout is the output luminance value and Vin is the input/actual luminance value. This formula causes the blue line above to curve. When gamma<1, the line arches upward, whereas the opposite occurs with gamma>1.

    2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):

    Original: smooth 8-bit gradient (256 levels)
    Encoded using only 32 levels (5 bits)
    Linear
    Encoding:
    linearly encoded gradient
    Gamma
    Encoding:
    gamma encoded gradient

    Note: Above gamma encoded gradient shown using a standard value of 1/2.2
    See the tutorial on bit depth for a background on the relationship between levels and bits.

    Notice how the linear encoding uses insufficient levels to describe the dark tones — even though this leads to an excess of levels to describe the bright tones. On the other hand, the gamma encoded gradient distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color andhistograms are all based on natural, perceptually uniform tones.

    However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8X as many levels (11 bits) would’ve been required to avoid image posterization.

    GAMMA WORKFLOW: ENCODING & CORRECTION

    Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

    RAW Camera Image is Saved as a JPEG File JPEG is Viewed on a Computer Monitor Net Effect
    image file gamma + display gamma = system gamma
    1. Image File Gamma 2. Display Gamma 3. System Gamma

    1. Depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2).
    2. Depicts a display gamma equal to the standard of 2.2

    1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.

    2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.

    3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.

    IMAGE FILE GAMMA

    The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those using sRGB and Adobe RGB 1998 color), but the big exception is with RAW files, which use a linear gamma. However, RAW image viewers typically show these presuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:

    linear RAWLinear RAW Image
    (image gamma = 1.0)
    gamma encoded sRGB imageGamma Encoded Image
    (image gamma = 1/2.2)

    If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.

    Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.

    DISPLAY GAMMA

    This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values. Older macintosh computers used a display gamma of 1.8, which made non-mac images appear brighter relative to a typical PC, but this is no longer the case.

    Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file (), changing the display gamma () will therefore have the following overall impact () on an image:

    gamma curves chart with a display gamma of 1.0
    Display Gamma 1.0 Gamma 1.0
    gamma curves chart with a display gamma of 1.8
    Display Gamma 1.8 Gamma 1.8
    gamma curves chart with a display gamma of 2.2
    Display Gamma 2.2 Gamma 2.2
    gamma curves chart with a display gamma of 4.0
    Display Gamma 4.0 Gamma 4.0

    Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
    Recall from before that the image file gamma () plus the display gamma () equals the overall system gamma (). Also note how higher gamma values cause the red curve to bend downward.

    If you’re having trouble following the above charts, don’t despair! It’s a good idea to first have an understanding of how tonal curves impact image brightness and contrast. Otherwise you can just look at the portrait images for a qualitative understanding.

    How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma (1/2.2) is uncorrected by the display gamma (1.0), resulting in an overall system gamma (1/2.2) that curves upward. In the second picture, the display gamma (1.8) doesn’t fully correct for the image file gamma, resulting in an overall system gamma (~0.82) that still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma (2.2) exactly corrects the image gamma, resulting in an overall linear system gamma (1.0). Finally, in the fourth picture the display gamma (4.0) over-compensates for the image gamma, resulting in an overall system gamma (~1.8) that curves downward (thereby darkening the image).
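    The four scenarios can be checked numerically. Applying one gamma after another multiplies the exponents, so the overall system gamma is the file’s encoding gamma times the display gamma. This is a sketch, not production code; the 1/2.2 encoding gamma is the standard assumption from above:

```python
# Sketch of the four chart scenarios above. Applying one gamma after
# another multiplies the exponents, so the overall system gamma is the
# file's encoding gamma (assumed 1/2.2) times the display gamma.

ENCODING_GAMMA = 1 / 2.2

def system_gamma(display_gamma, encoding_gamma=ENCODING_GAMMA):
    """Overall gamma of the encode-then-display chain."""
    return encoding_gamma * display_gamma

def displayed(value, display_gamma):
    """How a linear value finally appears on screen."""
    return value ** system_gamma(display_gamma)

for dg in (1.0, 1.8, 2.2, 4.0):
    sg = system_gamma(dg)
    # sg < 1 brightens a mid-tone, sg == 1 leaves it unchanged, sg > 1 darkens it
    print(f"display gamma {dg}: system gamma {sg:.2f}, mid-tone 0.5 -> {displayed(0.5, dg):.3f}")
```

    A display gamma of 2.2 gives a system gamma of exactly 1.0 (linear), matching the third chart.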

    The overall display gamma actually comprises (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.


    CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

    LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.
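    A 1-D LUT of this kind can be sketched as follows. The 256-entry table, the target display gamma of 2.2 and the hypothetical native panel gamma of 1.8 are illustrative assumptions, not real calibration internals:

```python
# A minimal sketch of a 1-D look-up table (LUT). The 256-entry size, the
# target display gamma of 2.2 and the hypothetical native panel gamma of
# 1.8 are illustrative assumptions, not real calibration data.

def build_lut(native_gamma, target_gamma=2.2, size=256):
    """Pre-correct 8-bit inputs so the panel behaves like target_gamma.

    Feeding v ** (target/native) to a panel with response v ** native
    yields v ** target overall.
    """
    correction = target_gamma / native_gamma
    return [round(255 * (i / (size - 1)) ** correction) for i in range(size)]

lut = build_lut(native_gamma=1.8)
# The panel is then driven with lut[v] instead of the raw input value v.
```

    The design choice here is standard: a LUT trades a tiny amount of memory for avoiding a power-function evaluation on every pixel.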

    Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).

    OTHER NOTES & FURTHER READING

    Other important points and clarifications are listed below.

    • Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding actually increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
    • Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
    • Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
    • Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where Vout = Vin^gamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follow a standard gamma power law, but the overall gamma is approximated as 2.2.
    • Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.
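    The sRGB curve described in the “Applicability” point above can be sketched with its published constants. The threshold (0.0031308) and exponent (1/2.4) are the standard sRGB values; the comparison with a plain 1/2.2 power law is illustrative:

```python
# Sketch of the sRGB transfer function: linear below a small threshold,
# then a 1/2.4 power curve whose overall shape approximates a simple
# gamma of 1/2.2.

def srgb_encode(linear):
    """Convert a linear value in [0, 1] to its sRGB-encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear                    # linear toe region
    return 1.055 * linear ** (1 / 2.4) - 0.055   # power-law region

simple = 0.5 ** (1 / 2.2)  # ~0.730, plain power law
srgb = srgb_encode(0.5)    # ~0.735, close to the 1/2.2 approximation
```

    Even though the exponent is 1/2.4, the scale and offset pull the overall curve close to a simple 1/2.2 gamma, which is why 2.2 is quoted as sRGB’s effective gamma.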


  • What are Artist’s Statements?

    An artist’s statement is sometimes referred to as a ‘statement of intent’. It can be seen as a marketing device, or simply as a means of describing a practitioner’s interests. Artists’ statements:

    • vary in length and in the details they cover.
    • may relate to a specific body of work or talk about practice more generally. A statement probably contains information about any training (art college or other qualifications or experience relevant to the artist’s practice) and any relevant prizes, grants or awards. It is not the same thing as an artist’s CV, which lists training, qualifications, awards, exhibitions and publications in much the same way as a conventional résumé.
    • vary hugely in style and format; some sound convoluted and esoteric, others are more down to earth.

    The Artist Statement (UCA)

    A good artist statement will support your professional practice, for example:

    • Giving brief information to support an exhibition or catalogue
    • Submitting a proposal
    • Applying for a grant

    It should be:

    • Concise
    • Effective in communicating the details you wish to emphasize
    • Written in the first person
    • Written primarily in the present tense

    It should be adaptable in order to take into account:

    • Your audience
    • Your purpose or motivation for writing it

    It might contain information on:

    What your motivation is for the work you do:

    • What issues are you exploring and why?
    • What concepts, themes or convictions underpin your work?
    • How do your life experiences influence your work?
    • How does your personality influence your work?
    • How have your ideas developed?

    The techniques and materials you use:

    •  How and why did you choose them?
    • What scale do you work in?
    • Do you have a particular process of working?
    • Do you intend to explore other techniques or materials?

    Your background:

    • Are you a student or a practicing artist?
    • Details of your educational history if you feel it appropriate
    • Have you contributed to any prestigious shows or events?

    How you contextualise your work:

    • Where do you feel you fit into the Contemporary Art World?
    • Does your work challenge the work of others?
    • Have you appropriated or referred to the work of others?
    • Your goals and aspirations and to what extent you have realised them
    • Personal reflections on your work

    Artists’ statements from other photographers

    Many photographers do not have artists’ statements on websites. They have a fairly straight biography, then either let the images speak for themselves, or put short text for each series of images and/or include interviews and articles where they talk about their aims and methods in some depth.

    Michael Tsegaye – has a very short and succinct artist’s statement, then informative overviews of his different portfolios. See post: Michael Tsegaye rough notes.

    Nii Obodai – a biography and ‘meaning’ statement. All in the third person – I think this makes things less direct and more flowery. See post: Nii Obodai rough notes

    Mathua Mateka – quite a long artist’s statement with a lot of personal information that may or may not be relevant to understanding his photography. See post: Mathua Mateka rough notes.

    Emeka Okerere – another long one in third person. See post: Emeka Okerere rough notes.

    Paul Shambroom: very short, in the third person and mostly about his achievements rather than what he is trying to do. More of a biography.

    Alec Soth – example of understatement (in the knowledge that he is already famous!) Nothing about his approach or underlying aims.

    Jorma Puranen’s introduction to Imaginary Homecoming cited in the coursebook is no longer available at the link given. His definition of landscape:

    “A landscape is speechless. Day by day, its only idiom is the sensory
    experience afforded by the biological reality, the weather conditions, and the actions that take place in the environment. However, we can also assume that a landscape has another dimension: the potential but invisible field of possibilities nourished by everyday perceptions, lived experiences, different histories, narratives and fantasies. In fact, any understanding of landscape entails a succession of distinct moments and different points of view. The layeredness of landscape, in other words, forms part of our own projection. Every landscape is also a mental landscape.” (Jorma Puranen,1999, Foreword to Imaginary Homecoming, Oulu: Pohjoinen)

     

    5.6 My Own Artist’s Statement

  • Artists’ statements

    Exercise 5.7 Prepare your artist’s statement


    Examples from coursebook

    On the front page of Alec Soth’s website he writes:

    “My name is Alec Soth (rhymes with ‘both’). I live in Minnesota. I like to
    take pictures and make books. I also have a business called Little Brown
    Mushroom.” (http://alecsoth.com/photography)

    This is clearly very understated, perhaps even flippant, and it takes a reputation that precedes oneself to be able to write something as laconic as this! Often, an artist’s statement is written by another person (or is designed to sound as if it is by being written in the third person), which adds gravitas.

    Jorma Puranen’s introduction to Imaginary Homecoming is somewhat more convoluted, although it provides a thoughtful definition of landscape; the full quotation is given earlier in these notes (Jorma Puranen, 1999, Foreword to Imaginary Homecoming, Oulu: Pohjoinen).

    This statement about the work of Ola Kolehmainen is a good example of how a method of
    presentation is linked to the concept of the work:

  • Documentary Integrity and Truth

    Being there

    How will you operate as a photographer?

    Will you ask permission or will you be a fly on the wall,
    a ghost who never affects the image? This is a major question relevant to your production ethics.
    If you tell people what you’re doing, then they’ll react differently to you; they may be guarded or
    wary of how you’ll portray them.

    Photographers who lived with the communities they were photographing:

    • Chris Killip with the sea coal gatherers in the North East
      of England
    • Martin Parr with the people of Hebden Bridge in Yorkshire.
    • Bruce Davidson has done a similar thing in New York.

    This is very different to the approach of Garry Winogrand or Cartier-Bresson, neither of whom interfered or announced their presence. Winogrand operated like a ghost but got quite close to the action to produce his wide-angle views, whereas Cartier-Bresson
    remained peripheral, on the edge, trying not to be important to the subject.

    Truth
    This is a big one for the social documentary practitioner. The image has to have integrity – it has to be honest and factual in order to validate it for the viewer as an accurate portrayal. This is arguably where the boundary lies between social documentary and photojournalism, where the image has more of an editorial purpose. In photojournalism the choice of photographer and the style of image-making will always have to suit the editorial nature of the publication, and this is perhaps a bridge too far for the documentary photographer. Most documentary photographers would have no objection to any magazine or media outlet publishing their documentary images provided that they were published as the photographer intended the viewer to see them, not cropped or enhanced.

    Kendall Walton ‘Transparent Pictures’

    ‘Ambiguities and discontinuities’  Berger & Mohr 1995 p91.

    Bazin

    For the first time, between the originating object and its reproduction there intervenes only the instrumentality of a non-living agent. For the first time an image of the world is formed automatically, without the creative intervention of man…in spite of any objections our critical spirit may offer, we are forced to accept as real the existence of the object reproduced, actually, re-presented…’

    The Ontology of the Photographic Image 1945

    Sekula

    If we accept the fundamental premise that information is the outcome of a culturally determined relationship, then we can no longer ascribe an intrinsic or universal meaning to the photographic image.

    On the Invention of Photographic Meaning 1997 p454

  • Reality and hyperreality

    This is a practice-based course so we won’t be going into detail about the nature of truth,
    hermeneutics, reality and hyperreality. What follows is a very brief summary. However you may
    want to research some of these areas for yourself. You could start by looking at some of the titles
    in the reading list at the end of this course guide.
    We all feel that we’re aware of reality, but what is ‘real’? If you live in a desert region, perhaps
    access to a tap with fresh clean water within a five-minute walk is a fantasy. This is not the world
    of the suburb, where everyone has several taps of their own. Therefore one person’s reality and
    normality is not that of another. The freedom to travel is a reality for most of us, but not in
    all countries. As a tourist it’s possible to travel from region to region in Cuba but for a Cuban
    national it isn’t – unless you’ve got the correct paperwork. If we see a travel programme on
    TV we see a reality for the tourist, for the paying visitor, a valued source of national income.
    There may be many issues that are a reality of life in the countries we visit that we’re not aware
    of and wouldn’t like if we were. This is not to say that there is a judgment to be made by us.
    As a tourist with a completely different set of values that underpin your reality, how can you
    make such a judgment? This is where the photographer living within a society must start to
    consider the nature of the imagery that is produced. How will it portray the subject – in reality
    or hyperreality?
    Philosophers and critics talk about hyperreality where the human mind can’t distinguish reality
    from a simulation of reality. Hyperreality is what our consciousness defines as ‘real’ in a situation
    where media shape or filter an original event or experience. In other words, it’s ‘reality by proxy’.
    As photographers we need to understand where we are with the images we produce in terms
    of their reality. Is the image reality, a representation of reality or a simulation of reality? Jean
    Baudrillard (1929–2007) was a French philosopher who addressed hyperreality and who talked
    about the nature of reality in terms of simulacra and simulation.
    The link below will take you to an interview that discusses simulacra and simulation. It talks
    about the destabilisation of the media and our ability to identify what’s real and what’s not real.
    www.youtube.com/watch?v=80osUvkFIzI
    In a nutshell, simulation is the process whereby representations of things come to replace
    the things being represented and indeed become more important than the real thing. At the
    extreme, you end up with a simulacrum which has no relation to reality. So an image may:
    • truly reflect reality
    • mask and pervert reality
    • mask the absence of reality
    • bear no relation to any reality – it is its own simulacrum.
    We need to be careful to avoid simulation lest we engage with hyperreality as a reality without
    recognising its values.
    Jorge Luis Borges (1899–1986) in his work On Exactitude in Science (1946) describes
    hyperreality as “a condition in which ‘reality’ has been replaced by simulacra.” Borges felt that
    language had nothing to do with reality. Reality is a combination of perceptions, emotions,
    facts, feeling, whereas language is a series of structured rules that we need to obey to help
    others perceive our reality. The same is true of visual language.
    Baudrillard argues that today we only experience prepared realities – edited war footage,
    meaningless acts of terrorism, the Jerry Springer Show:
    “The very definition of the real has become: that of which it is possible to give
    an equivalent reproduction… The real is not only what can be reproduced,
    but that which is always already reproduced: that is the hyperreal… which
    is entirely in simulation. Illusion is no longer possible, because the real is no
    longer possible.”
    Baudrillard argues that we must attain an understanding of our state of perception and the
    message content that is communicated in terms of the images we construct. Even if we see it as
    a straight record of what we saw on the day, can this be a reality and, if so, for whom?
    Documentary photographers are entering a new age with a new set of criteria. The issue of
    technical quality is not relevant. Images from mobile phones that capture the ‘moment’ will
    be printed by newspapers if the image tells the story. Where then is the need for a bag full of
    cameras and kit? In a new age of photography the documentarist will need to engage with the
    issue of hyperreality by establishing credibility, motivation and integrity. The reputation or name
    is then the status giver, the endorser of reality and truth to the images produced and offered to
    the media and the public.

  • Landscape and Identity

    The concept of ‘identity’ is central to most landscape photography – the cultural, historical, ecological and industrial factors shaping identities of people and places and the ways in which the two interact. ‘Identity’ is however not fixed. Individuals and groups of people are continually trying to reconcile multiple and changing identities as a means of making sense of their place in the world. Identities are constantly manipulated and contested by others in political processes. In the same way, meanings of ‘landscape’ and symbolic associations of places are also multi-layered, changing and often manipulated in attempts to shape power relationships between people and groups of people and peoples’ control over and use of ‘nature’ and other resources.

    In deciding how to portray particular landscape/s key considerations are:

    • Who created, owns, uses and changes this landscape? How do these people relate to each other?
    • How is this ‘landscape’ distinguished from other similar places (who decides what is and what is not similar? by what criteria? why are those criteria important?)?
    • How do (different) users and inhabitants of a place feel towards (different aspects of) the landscape (pride, indifference, disrespect, fear of loss)?
    • What attitudes do (which) outsiders have towards it?

    Underlying all these considerations must also be a consideration of:

    • How are these feelings, identities and relationships manipulated, why and by whom? (See Part 3 landscape as political text)
    • Self-awareness on the part of the photographer of their own identity/ies and assumptions and power/desire (or lack of it) to manipulate and change things.

    See posts:

    Dana Lixenberg’s Last Days of Shishmaref
    Jacob Aue Sobol’s work Sabine (2004)

    ‘British-ness’, collective identities and the countryside

    “The concept of the countryside is a significant element of the British identity. All countries have rural areas, but Britain’s is one of its ‘unique selling points’.” (Alexander p119)

     4.2: The British landscape during World War II

    Attitudes towards social issues like renewable energy or housing policy are often polarised by ‘Not in My Back Yard’ objections to ‘visual impact’ on the land, according to rather idealised ‘picturesque’ notions of what the landscape used to, or should, look like.

    Personal identities and multiculturalism

    British photographers have questioned established and stereotyped images of the British landscape and its heritage. Photographers like Godwin and Darwell manipulate the aesthetics of the image – beauty in texture, pattern and atmosphere – to keep the viewer’s attention, then guide it to pose more challenging and shocking questions about the landscape and people’s relationship to it. The effort of extracting meaning in this way also makes the images more memorable. See posts:

    • Immigration and race:  Ingrid Pollard and Simon Roberts.
    • Access to the countryside:  Fay Godwin
    • Environmental pollution and degradation: John Darwell Dark Days (2001 Foot and Mouth Outbreak).
    • Relationship with animals: Clive Landen – a sharp documentary style and brutal images of death in Abyss (2001 Foot and Mouth Outbreak) and Familiar British Wildlife (a series on roadkill).

    4.3 A subjective voice

  • Photography, memory and place

    “… in Photography, I can never deny that the thing has been there. There is a superimposition here: of reality and of the past. And since this constraint exists only for Photography, we must consider it, by reduction, as the very essence, the noeme of Photography.” (Barthes 1982, p.76)

    Photographic images affect the way we remember moments we experienced ourselves, and our impressions of things we experience via the image alone. Barthes also proposes how the photograph can act as a “counter-memory”, aggressively blocking impressions formed by our other senses as it “fills the sight by force” (ibid., p.91, quoted in Alexander 2013, p.107).

    Many practitioners have engaged with ideas of personal memories (family albums, holidays) in one form or another:

    • Trish Morrissey
    • Gillian Wearing
    • Joachim Schmid.
    • Peter Kane goes back to places depicted in his family’s photo album and re-photographs and superimposes the images.

    Photography has also been used to explore and challenge the construction of collective memories (e.g. documentation of ‘early’ or ‘late’ photography as well as events unfolding):

    • Shimon Attie uses contemporary media to explore relationships between space, time, place and identity, working with communities to find new ways of representing their history.
    • Jeff Wall produces large tableaux of events, or staged events, referencing the way history painting interpreted and often glorified historical events.
    • Luc Delahaye also references history painting, using large format analogue cameras to document meetings, political ceremonies and war zones.

    But as Bate cautions (see also my reaction to Meyerowitz):

    “As sites of memory, photographic images (whether digital or analogue) offer not a view on history but, as mnemonic devices, are perceptual phenomena upon which a historical representation may be constructed. Social memory is interfered with by photography precisely because of its affective and subjective status… in terms of history and memory, photographs demand analysis rather than hypnotic reverie.” (Bate, ‘The Memory of Photography’, pp.255-256)

    The matter of ‘reality’ is an important aspect to consider in relation to all areas of photography: who is recording what, why and for whom?

    3.5: Local history

    3.6: ‘The Memory of Photography’