Why SOOC still isn’t really workable

X1D5_B0002671 copy

Of all the 1500+ posts I’ve made here, I can’t recall ever exploring why SOOC (straight out of camera) images are – let’s not say ‘bad’ – but inherently compromised, at least given the current state of technology. No matter how ‘natural’ a company claims its out of camera rendition to be, something will always be missing, for the simple fact that no current camera can read your mind.* Every situation/scene/composition is different; every photographic intent is different; and every set of ambient parameters (light, subject position, etc.) varies from image to image – maybe not by much, but it doesn’t take a whole lot of change to make a very different image from the one you intended. Two things here: intention, and uniqueness. And uniqueness is at the core of why we find ourselves compelled to make a photograph at all: something stood out enough to make us sit up, take notice and either want to remind ourselves of it again later, or share it with the world.

*I am leaving myself room for some seriously heavyweight machine learning algorithms in case this article is read in posterity. And more on the machine learning later.

But leaving the processing to the camera – and by extension, to the engineers who designed and built it – and accepting its default output as final is no different from leaving your film at the drugstore of 20 years ago and being happy with whatever comes out at the other end. For those of you who remember when that kind of photography was common – go on holiday, take pictures of something you thought was interesting or memorable, drop off films, look at small prints once, forget, repeat – you’ll also know just how much of a difference it made when you took control and did your own developing and printing (assuming you did). Same thing with JPEG and RAW and the dirty ‘Photoshop’ word: done right, it’s nothing more than taking back creative darkroom control, because the camera can’t read your mind, there isn’t a universal set of settings that works for everything, you can’t always get it perfect out of camera even if you try, and dammit, artistic expression requires some flexibility.

There’s one more elephant in the room, and it applies even if and when you get everything right: today’s cutting edge sensors push 15 stops or more of usable dynamic range at base ISO; even the lesser-performing smaller ones in a smartphone might do 8-10 on a good day and with a tailwind. Not every scene (read: every idea) requires that kind of latitude, and even if it does – presenting it linearly, as sensors tend to do, doesn’t make for a particularly nice image. The reason is the nonlinearity of human vision: we see much more tonal extension in the highlights than in the shadows, with a drop in saturation at either end (more so in the shadows) thanks to the physiology of the eye. Whilst the better in-camera processing algorithms reflect this, they do not – and cannot – make up for the sophisticated processing the brain does in local areas of an image to keep everything ‘within range’, or the filling in of gaps from memory. But, as we all know: due to the way the signal is measured, there’s more information gathered in the brighter stops, which translates to smoother tonal gradation, more accurate color and no posterisation.
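
To make that last point concrete, here’s a minimal back-of-envelope sketch (Python; the 14-bit depth is just a common raw value, not any specific camera’s): a linear ADC halves its available code values with every stop you descend from saturation, so the top stop alone holds half of all recordable tonal levels.

```python
# Back-of-envelope: how a linear ADC distributes code values across stops.
# Each stop below saturation gets half the levels of the one above it --
# which is why highlights gradate smoothly and shadows posterise first.

BIT_DEPTH = 14  # a common raw bit depth; some medium format backs use 16

levels = 2 ** BIT_DEPTH
for stop in range(8):
    upper = levels >> stop        # top of this stop's range of code values
    lower = levels >> (stop + 1)  # bottom of this stop's range
    print(f"stop {stop} below saturation: {upper - lower} levels")
```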

For example, medium format cameras tend to do well on color and tonality for several reasons: bigger pixels that gather more light (higher signal to noise ratio at a given luminance level); often 16 bit processing pipelines (a more finely graded scale for measuring that signal); and – at least in Hasselblad’s case – individual sensor calibration and some sophisticated ADC algorithms. But still: put a very low contrast, low-saturation scene (think foggy morning or landscape by moonlight, for example) into a Hasselblad sensor and you’ll land up with a very flat image – which is definitely not what we perceive. Worse, if we expose for optimum data collection, it’ll be bright, desaturated and without very much detail at all. Yet if we expose for final output, we might not have enough tonal values to capture all of that differentiation. Exposing to the right gives us more data, but also the necessity of manipulating it later on. Next lesson: perception isn’t the same as absolute measurement in photography.
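
As a loose illustration of that trade-off (a sketch only – the one-stop push and the 2.2 gamma are arbitrary stand-ins, not any camera’s actual pipeline): expose bright for data quality, then pull back down and re-curve for the intended look.

```python
import numpy as np

# Sketch of the ETTR workflow: capture with a push to fill the top of the
# linear range, then undo the push and apply a perceptual curve in post.

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.5, 10_000)  # linear scene luminances, half scale max

ettr_push = 2.0               # +1 stop at capture, stopping just short of clipping
raw = scene * ettr_push       # bright, high-SNR linear raw data

pulled = raw / ettr_push      # post: restore the intended brightness
output = pulled ** (1 / 2.2)  # simple gamma standing in for a real tone curve

print(f"raw mean {raw.mean():.3f} -> output mean {output.mean():.3f}")
```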

Viewing a static (finished) image isn’t the same as a live, dynamic, realtime scan of the scene – our eyes are constantly scanning and our brains piecing the result together through persistence of vision – which also means presentation size tends to be of a dimension we can take in at one glance. The dynamic compensation for different areas of the image doesn’t happen, so we must process that in, giving rise to the necessity of things like dodging and burning and gradients. These are of course highly subject and composition specific, which frustrates any easy attempt at automation. Global highlight or shadow recovery, for instance, only gets you so far: software algorithms aren’t yet smart enough to differentiate between what should have recovery applied to look natural (e.g. skies) and what should not (e.g. hard reflections in a mirror or glass).
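
For illustration, a hedged sketch of the masked, local version of that recovery (the knee value and the crude luminance proxy are invented for the example): the code can find the bright regions, but deciding which of them should be recovered to look natural is exactly the judgment it lacks.

```python
import numpy as np

# Luminosity-masked highlight recovery: darken only where the mask says
# 'highlight', instead of applying a global adjustment to the whole frame.

def highlight_mask(lum, knee=0.7):
    """0 below the knee, ramping linearly to 1 at pure white."""
    return np.clip((lum - knee) / (1.0 - knee), 0.0, 1.0)

def recover_highlights(img, amount=0.3):
    lum = img.mean(axis=-1, keepdims=True)  # crude luminance proxy
    return img * (1.0 - amount * highlight_mask(lum))

tile = np.random.rand(4, 4, 3)  # stand-in linear RGB data
print(recover_highlights(tile).shape)  # mask applies per pixel: (4, 4, 3)
```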

X1D5_B0002687 copy

X1D5_B0002687 copy

The images in this post were chosen deliberately to illustrate exactly that: the main image is of course the final intent, with local adjustments; the two above are not. The first is the technically ‘correct’ exposed-to-the-right exposure straight out of the camera; the second is exposed as close to final output intent as I could manage. You can clearly see something is missing in the shadows, and there’s nowhere near as much separation in the water. On top of that, neither ‘standard’ image has as much vibrance in the spray rainbow as I remember.

Could we eventually get to the point where programming is smart enough to recognise subjects? We’re already there. But to the point where the code can guess the presentation intent from the spatial arrangement, and thereby determine what needs to be recovered or what needs an increase in contrast…I think that’s a lot tougher. Theoretically, if you showed the computer enough images of similar subjects with processing you liked – or possibly even images of any subject with processing you liked – it could analyse local contrast parameters in luminosity, color and spatial frequency and then build a database to process an input image to taste; I’m pretty sure that’s what existing attempts at doing this have been using. Train it enough and you’ll land up with a pretty good bot for producing images that fit a certain predetermined style. Hell, even the iPhone’s ‘portrait’ mode uses a combination of pattern recognition and AF data from both cameras to figure out what is a ‘person’, where the physical boundaries lie, and how much defocus should be applied to subjects at various distances. (It’s not infallible though, as the ‘teapot’ stance inevitably produces a hole between the arms or legs that is still jarringly in focus.)
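
For what it’s worth, a toy sketch of the kind of per-image statistics such a training scheme might collect – local contrast over tiles plus a crude spatial-frequency proxy. Every name and number here is invented for illustration; real systems learn far richer features than this.

```python
import numpy as np

# Toy 'style signature': summarise an image's local contrast and detail so
# that processed examples could, in principle, be compared and matched.

def style_signature(lum, tile=16):
    h, w = lum.shape
    contrasts = [
        lum[y:y + tile, x:x + tile].std()  # local contrast, one tile at a time
        for y in range(0, h - tile + 1, tile)
        for x in range(0, w - tile + 1, tile)
    ]
    high_freq = np.abs(np.diff(lum, axis=1)).mean()  # crude detail measure
    return {
        "local_contrast_mean": float(np.mean(contrasts)),
        "local_contrast_spread": float(np.std(contrasts)),
        "high_freq_energy": float(high_freq),
    }

print(style_signature(np.random.rand(64, 64)))  # stand-in luminance image
```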

I can imagine this being very useful for people who have to produce a lot of images quickly, but are willing to accept a little compromise in exchange for volume – event or wedding shooters, for instance. But if you’re an art photographer shooting a project with the specific intent of making it look visually unique (a reasonable assumption), then it might not work so well, because there’s no precedent to follow. You could probably tell the computer not to process like anything that already exists in the database of images, but this is one step away from proof by exclusion. But you know what? Like every other tool, I suspect once machine learning becomes familiar enough, we’ll learn to live with it – either using it to get us closer and then doing less ourselves, or perhaps spending more time training it to deliver the output we personally find attractive. And having spent pretty much my entire photographic career up to this point with a severe SOOC JPEG allergy – it may well prove to be a hidden blessing by reducing time spent behind a computer. Such time could certainly be more productively spent in the field – and that kind of exploratory, experimental photographing that pushes creativity is something I’ve been doing far, far too little of lately. MT


Comments

  1. Very useful entry, I will definitely come back to your blog, because I have been taking photos for some time 🙂 I need more such interesting information.

  2. Frans Richard says:

    Great article, although I don’t entirely agree with the title. I would say ‘Why SOOC still isn’t really workable, for an artist’.

    I think an out of camera jpeg is like an engineer’s attempt to produce an image that is as close to reality as possible, whatever ‘reality’ may be. If the engineers get it exactly right, every jpeg of a particular scene taken from the exact same place at the exact same time will look exactly the same when viewed on the same screen, no matter which camera was used. That is OK if that is what you intended, and for many people who are ‘just’ recording memories, it probably is. The engineers are getting real close – close enough for many – so I think SOOC is workable for many people nowadays.

    An artist, in contrast, wants to create something unique. Something that expresses his or her vision, not necessarily reality. Probably anything but reality. That is why SOOC isn’t really workable for an artist, because, as you said, the camera cannot read your mind, and hopefully never will be able to. That is why artists need to process RAW images to their intent. Perhaps AI will be able to simulate part of someone’s intent sometime in the future, and that could save us time behind the computer screen, but I think, and hope, the human factor in full will forever remain a mystery that cannot be captured in an algorithm.

    • Actually, I think this is a very fair statement – good points all round!

    • Very well said, a very balanced observation. Jpeg isn’t for everybody, and the same goes for RAW. Jpeg has come in for an element of derision in some comments at the expense of RAW. RAW doesn’t make anyone a better photographer; it’s simply a means to fine-tune the image – or, to use Ming’s culinary references, a chef seasoning a dish with the relevant amount of salt and pepper.

  3. Per Magnussen says:

    I’m sure you’re right. It is a paradox, though, that many photographers, especially landscape photographers, even the pros, would have gotten better results if they left it to the camera. False candy-colours dominate the genre. I have a small collection of coloured postcards from Norway from around 1900-1920. The glass plates were painted before printing, I think. They are beautiful in a strange way. They remind me of modern digital landscape photography. I guess it is a question of restraint.

    • The false candy color is another good example of why the camera can’t read the photographer’s mind – who’d think that was actually desirable? It’s only necessary because what was actually seen/interpreted by the viewer probably wasn’t translated well by the camera, and we then have the false bias of memory…

      • Andrew Franta says:

        It seems to me that’s the wrong way ’round. Rather than a product of poor translation by the camera and false memory, I think candy-colored landscapes are a genre made possible by digital processing. (This is intended as a description rather than a criticism.)

  4. Strange digital world. In film days, photographers got great results with SOOC Kodachromes – not to mention B/W.

    • Ha! Ha! Nice one. But then there was all the faff of getting the screen out, lining the projector up and no doubt moving some furniture around in the process. All worth it, though, to see those glorious colours on a 60″ screen. Knockout, with my 6×6 slides fully displayed at 60″ x 60″.

  5. Wonderful article, and one I’ll be putting in front of my photography students. I teach them that the RAW file is your negative. You should nail as much in-camera as possible – but from there you have got to take that negative into the darkroom and bring out the rest of your vision. Thank you!

  6. David Bateman says:

    This is an interesting article. One thing people are missing, though, is that there is no true color or true range. Genetically, we all actually see color and transitions differently. Think about the three color cone receptors: what they are sensitive to varies, similar to the range of dyes used on sensors and people’s various degrees of color blindness. Range is similar with our rod receptors. So the transition of the rainbow would actually look different to 10 people all standing there. I have a friend who has very shallow range and night blindness; when biking with him at night, it looks like he can go faster than the speed of light. At a point he can move faster than he can see.
    So we add our own perception to our images to tell the world what we see; some equate this to a personal style.
    I think camera manufacturers have gotten better at finding the average – what most people see – and this is why SOOC has gotten better. It may also be that a distribution of people with similar perception will all buy from the same camera manufacturer, as it’s closer to what they see. This is how you can explain the Olympus color or green.
    So my point is that it’s impossible for a camera, of which a manufacturer hopes to sell a million, to provide an out of camera image matching what all 1 million actually saw. The variation in colors and tonal transitions will be too high in the market.

    • Actually, there is such a thing as true color – if an absolute device measures RGB 120, 140, 165 and another can reproduce it, then I would consider that absolute in a colorimetric sense. Without this we wouldn’t even be able to reproduce to an approximate level, much less have Pantone charts.

      However: if you also consider the whole question of end viewer perception, I agree – that’s something else entirely, and there is no such thing as absolute by any means. My guess is SOOC has improved for several reasons – not just the averaging of perception, but also improvements in collecting raw input data off the sensor and then in the processing algorithms and hardware. We’ve also got much better displays, in both spatial density (translating to representations of gradation) and gamut. A bit of improvement everywhere makes for a very tangible change from, say, 10 years ago. Put another way: we might have captured 5MB of information before and could faithfully reproduce 2MB of it; it’s now 50MB and 30MB (hypothetically). Even though the reproduction will never be 100% of capture, that limited reproduction is still going to be visibly far better than even the previous input limit…

      • David Bateman says:

        Yes, I was going to use the numbers of an RGB triplet for this example. We can place specific numbers on different “colors”, but the perceived end user color will be different. So for me it’s an orange tint, but for you it’s a yellow tint.
        Although with more numbers the hard changes become softer. So you are correct in your final example. Imagine just 8 values: my blue will not look like yours. But with 2 billion values they start to look similar. Other aspects specific to people are vibrance, saturation, and intensity. So there can be many variables added. But you know what they say: with enough variables you can fit an elephant.
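
A tiny sketch of the granularity point in the thread above: the same channel value quantised at a few arbitrarily chosen bit depths. With coarse steps the ‘same’ colour lands on visibly different codes; with billions of values the differences shrink toward imperceptibility.

```python
# Quantising one channel value at different bit depths: the coarser the
# scale, the larger the gap between the intended and the recorded colour.

def quantise(value, bits):
    """Snap a 0..1 channel value to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

colour = 0.5227  # arbitrary channel intensity on a 0..1 scale
for bits in (3, 8, 16):
    q = quantise(colour, bits)
    print(f"{bits:2d}-bit: {q:.6f} (error {abs(q - colour):.6f})")
```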

  7. Ming, I think your instruction in your email school and workshops said it best: the post processing is used to place the information you want at the tonal level that you want. Do you want a high key or low key photo? No algorithm can determine that. Where do you want the contrast and where do you want to reduce contrast? And so on.

    I always think of the camera collecting more information than I can use, and it’s my job as the photographer to highlight the information that expresses my intent for the photo best.

    • “Do you want a high key or low key photo? No algorithm can determine that.”

      Exactly!

      Photographing in the field is like hunting on the shelves at the supermarket – curating is like cooking – and the output is like plating for service…

  8. There is one camera that lets you adjust the OOC JPEGs more than any other, and that’s the Olympus Pen-F. You can adjust the individual colours on the fly to create your own unique colour profile for each image. Prefer b/w? There are numerous adjustments you can make, including grain, vignetting, gradients and colour filters with multiple strength settings. Saves all that mucking about on the computer. The JPEGs are generally so good I can’t replicate them from the RAW files in Photoshop or Lightroom.
    Of course there’s always a place for the latter to change the original images.

    • I think all of the Olympuses after the PEN-F will also do that – Robin?

      • No, they don’t have the Pen-F’s level of control over the profiles with its creative dial. There is a colour profile editor where you can make some minor adjustments, but only the Pen-F lets you create your own style and readily adjust it to suit each image.

      • Robin Wong says:

        Olympus only introduced that level of customization in the PEN-F with color profile control. Having tried that, I would still prefer to post-process my colors to taste using a RAW file. The JPEG customization, as useful as it is, can be quite limiting, because not all lighting conditions and shooting environments are the same. Even a slight variation will cause a tiny shift of color balance.

    • “The JPEGs are generally so good I can’t replicate them from the RAW files in photoshop or lightroom” — again, unless the camera manufacturer makes their underlying debayering methods known, this will continue to be a problem. There is a long workaround, however: use the proprietary software supplied to generate a baselined TIFF, then bring the TIFF into LR or PS or run it through plug-ins like Perfectly Clear…

      • Ian Parr says:

        “JPEGs are generally so good I can’t replicate them from the RAW files in Photoshop or Lightroom” – I first noticed a similar issue with my Olympus XZ-10 compact. Sometimes there were aspects of the JPG output that I couldn’t get Lightroom to replicate. At the same time, Canon RAWs processed in Lightroom always seemed to give better results than the OOC Canon JPGs. I could not work out if this was Olympus doing a great job with their in-camera processing, Adobe doing a poor job of processing the Olympus RAWs or just a quirk of this camera.

        Recently, I found that with my Panasonic GX80 set to RAW only, the in-camera preview image was relatively low resolution and I couldn’t use it to properly assess focus accuracy on the camera LCD screen. With the camera set to RAW+JPG the preview is higher resolution, presumably it is using the recorded fine JPG.

        Now I tend to shoot RAW+JPG on all my cameras for general photography to cover all eventualities. I use a RAW workflow but can refer to the OOC JPG for a sanity check if needed.

          • I understand that if you shoot RAW only, the camera generates a low-res jpeg purely for viewing on the screen. The “+jpeg” option displays the jpeg at whatever quality was selected.

          • Correct. Which is why + jpeg is not a bad idea to at least get an accurate idea of whether you hit focus or not, plus with extremely flat settings dialed in to roughly judge exposure latitude. Still, not useful for final presentation (and really something that should be baked into the raw file to begin with).

  9. Does the same argument not also apply to dropping off a roll of film at ‘One Hour Photo’ vs either processing the film yourself, or getting it developed by someone a bit more expensive than your local pharmacy? I think OOC has its place for anything time critical (or for just recording memories accurately enough).

  10. Bruce McL says:

    We are entering the age of computational photography, and delayed processing outside of the camera allows more processing power to be used in the computations. In order for delayed processing to win over SOOC every time, two things have to be true:

    1) The unprocessed file contains all of the information the camera used to make its processing decisions.

    2) The external processing software has access to all of the tools or “looks” that the camera can apply to the image.

    As an example for item 2: Olympus processing software has access to Olympus looks, Lightroom does not. Fujifilm does not make their own processing software, and they give Lightroom users access to their film looks.

    What opened my eyes to all of the computation going on inside of a camera was shooting JPEG + DNG with my iPhone, and then trying to process the DNGs to match the JPEGs. Apple does a lot of work on the SOOC images, and their processing varies a surprising amount from image to image.

    • Bruce McL says:

      Sorry, I meant “from scene to scene” at the end there. Apple does a surprising amount of adjusting color and tonality for different scenes. Turning down color saturation in night scenes is one example.

    • Agreed on both counts, but that assumes you want the ‘look’ the manufacturer has determined is ‘right’…which I suspect means that for casual snapshots or images of record, computation helps with speed and general workflow; for anything creative, it’s a hindrance, because things don’t turn out as you might expect.

  11. What 99% of people did 20 years ago:
    “…go on holiday, take pictures of something you thought was interesting or memorable, drop off films, look at small prints once, forget…”

    What 99% of people do now:
    “… go on holiday, take pictures of something you thought was interesting or memorable, upload photo, look at small images once, forget…”

    • Pretty much, except it’s not 2-3 rolls of film, but thousands of snapshots of (mostly) mundane rubbish or stuff to show off on social media…

      • To be honest, I feel you’re not quite doing the old analog pictures justice with the ‘look at them once, forget’ part. There were these simple photo albums that allowed people to easily revisit their photos, and they were really popular, as I remember – I wouldn’t be surprised if even today, people revisit their old digital pictures much less than their even older analog ones.

        • It’s much easier to flip through an album than boot up a computer, dig through files and then separate the ones you might want to look at. I suspect it’s probably very different for social users; I know I don’t tend to revisit images unless I need to find something for a client or specific purpose – I like to believe that the next image is going to be better than the previous one; if not, why continue shooting if your best work is behind you? 🙂

  12. Tuco Ramirez says:

    I’ve tried a Fuji camera for the last couple of months and it seems they emphasize jpegs very much, presenting them in the film processing model you mention. The result is somewhere between the old drugstore prints and tailored processing.

    • I find the Fuji in-camera engine in general does a very good job with JPEGs, but a really poor job with RAW – there’s more latitude in the JPEG and more sensitive tonal handling than you’d get from most other cameras, but the RAW files lack latitude especially in the shadows.

      • Tuco Ramirez says:

        Yes, I spend much more time with them, and it’s a relief to work on Nikon images now. But the Fuji kind of poked me into a new flow. I’ve been testing their different profiles to proof. Some have ‘interesting’ aspects that I reverse engineer, and then improve from there with the raw copy. I think Fuji is exploring how to make PP proprietary again (like film). That may be good enough for most of the market.

  13. While everything said about the advantages of RAW is true, a few disadvantages have to be mentioned as well: 1) Time. I often find that I have the most time to develop images DURING the holidays, while on my way taking pictures or shortly thereafter (in the car, bus, train or plane), and that I clearly lack this time after coming home. I thus love cameras like Pentaxes and Fujis which allow for in-camera raw development. Some (like the Pentax KP and K-1 II) have very powerful noise reduction algorithms integrated in something like a mathematical coprocessor.
    2) My memory of colour hues and casts of original scenes is weak and error prone (especially with, at times, weeks between capture and RAW development). “On site” RAW development allows me to get colour temperature, brightness and contrast very close to what I actually see. This is a real bonus in anything remotely scientific, or if you are making a real effort to document things as they presented themselves (and not what they should look like according to some ideal or interpretation). Recently I have been attracted to this approach more and more for travel photography. Many cameras allow monitor calibration nowadays, so some issues of the past are now greatly reduced.
    Just two thoughts that have swayed me more than once to go for in camera development.

    • I like the idea of shooting as much as possible while ‘there’ and curating later – on the plane, if the seat in front hasn’t crushed my laptop 🙂 A lot of cameras already do allow for in-camera processing of raw, but the size of screen and limited gamut of the LCD itself is something else, of course – and without the latter, any processing is going to be of limited use…

    • Knut, re bullet point 2. You won’t be the only one whose memory recall will be poor. I doubt if anyone could accurately remember a scene so long after the event. We believe we do, but for the most part we are idealising what we thought we saw. That is a good feature of the Fuji; having an instant reference back to the subject in the field. I don’t know if my older Fuji X-Pro 1 or X-E1 can do this, so I will be checking them out.

  14. Thanks for another well written article Ming

    No matter how good the SOOC is (and it usually isn’t) I like to think of editing RAW as part of the process.

    I do wonder if some day soon (in fact Fuji have recently released a rudimentary version of what I’m about to suggest) we might be able to connect our cameras to our computers and do a large amount of PP via a GUI, but using the camera as the processor

    I don’t mean (using the Fuji tool as an example) simply changing the default jpeg parameters and selecting a colour profile (eg “Velvia”), but having tone curves, highlight/shadow etc (basically the standard exposure and sharpening/NR tools available in all the current editors) in order to tweak “SOOC” to what we want.

    I appreciate that this won’t cover layers, masks, cloning and other very specific image edits. But it seems to me that we currently enjoy either a full on computer based editing solution or the “drugstore print” SOOC solution and I think there’s scope for something in the middle

    After all, cameras turn raw into jpegs far quicker than any raw converter, and many togs often have a need to quickly apply the same settings to many shots (event togs for example)

    Someone above commented that cameras are often judged on their (largely redundant to most users) SOOC output, but their raw output is often judged by the available raw converters, which ends up with internet narrative such as don’t use Adobe with Fuji or C1 is the best for colour on Sony etc

    Do you think that the lines between SOOC and raw edited images might become semipermeable in the future?

    • Fundamentally: it’s the difference between shooting film and sending it to the lab, or doing your own developing. Whilst the lab might sometimes suffice…if one is going to the effort of travelling, shooting and investing significant time and cost, I don’t think you’re going to ever be happy settling for automatic. I’m not even talking about layers and masks, but just basic global adjustments to contrast and tone curves.

      There’s increasing customisation in jpeg output in-camera, but the fundamental problem is this: there’s no one-size-fits-all group of settings that works for all scenarios; something heavily backlit is going to require very different treatment to a flat cloudy day. You cannot have a curve that works for both and produces a pleasing image (accurate is easier, ironically – but given most photography is meant to convey a subjective impression, we probably don’t want this so much).

      • Sorry I’ve not explained myself well (I tend to read you first thing in the morning, not my sharpest hour)

        Imagine (say) a Lightroom type interface on your computer

        You connect your camera to the computer, the raw is on the memory card in the camera

        You browse the raw files and edit them as desired, via the app on the computer copying and pasting edits where needed (like using LR), then when satisfied you hit export to disk and the camera does all the heavy lifting, giving you the sooc jpegs on the hdd of your computer

        So I don’t mean that the camera should have (say) a highlight slider, etc

        I mean that the camera OEM might want to take ownership of the end to end raw to jpeg process via its own external app that functions in concert with the camera hardware, rather than the incumbent solution of either accepting sooc or accepting a third party’s (eg Adobe) editing algorithms

        I’m still not convinced I’m explaining it very well

          • I think I understand what you’re suggesting – most camera makers have their own software that does just this; it’s not very good, however. This tends to be the case because few resources are put in, as it’s not profitable. We (Hasselblad) probably invest more than most, and even then…the summary of feedback is ‘why can’t it be more like Photoshop?’

          • David Bateman says:

              No, I think you’re still missing his point. What I think he is implying is a full-blown image editor with a terminal on whatever computer or tablet. The edits are visible on the tablet/junk computer; then, when done, the camera does all the GPU/CPU work to get the final desired image. This would be a good solution and something beneficial on the go.

            • There’s something like this already built into most cameras when you choose the ‘develop raw’ option – I know my Nikons have had it for some time…

              • Yes, most cameras have an in-built raw convertor, but they basically allow you to change your mind about &/or set the very limited in-camera jpeg options after the shot has been taken (as long as you shot raw)

                Typically most in camera options amount to low, medium, high, or standard, saturated, monochrome

                I’m talking about all the options you might find in LR, but being able to control them off the camera

                • Actually, the new ones are much, much more advanced than that…you even get curve control on some, plus click to set WB. My D850 even has perspective straightening, distortion correction and batch processing options (!) Maybe what you need might already exist?

                  • Just add all of that functionality into computer software that allows you to see it all on a proper screen and apply changes to multiple images.

                    As it stands using the in-camera raw convertors is a little like using a calculator watch – fiddly with a hard to see screen

                    Fuji have recently released something like this, but the failing is that it only offers what you can do in camera, for the SOOC only users it must be great though, as they can get what they’re happy with (no judgement from me on SOOC shooters, and I shoot Fuji myself*) but with the convenience of using a computer.

                    *I’ll download it and have a play, but I’m not really into SOOC jpegs (even Fuji ones :-D) so it has limited use for me

                    If someone made that work with the functionality of a full fat raw convertor, it could prove to be very useful

            • Spot on David, thank you

          • Yes, more like photoshop (or whatever) but operating within the walled garden of the camera’s own hardware

            Camera OEMs’ investment is more in making SOOC jpegs than in RAW editing software. (And we tend to throw the jpegs in the bin – that’s if we even bother to turn jpeg on in the first place.)

            My idea is about using the existent tech within the camera, but with a photoshop type front end

            This way the OEM owns more of the process over the creation of the image. The RAW has the same data in it no matter the software used to edit it, the camera is already capable of creating jpegs from raw

            Better OEM standalone software isn’t the answer… I can’t see you (Hasselblad) making a top-drawer editor that beats PS/LR and is open to the raw of every brand of camera; you’d also have to keep abreast of computer OS updates, new chipsets etc

            You own the hardware in your cameras, as it currently stands – the quality of your images is a little at the mercy of the brand of SW used to edit the raw.

            I’m sure we’ve all opened the same raw file in X number of different editors and been amazed at just how different they all look.

            You might even find that SW X works best with camera brand A, but camera brand B prefers to be edited with SW Y

            Tie the camera to the software, use the hardware that’s your own spec (the camera) and provide industry standard controls, via a computer based GUI to offer standalone raw SW levels of control over the jpeg engine that’s already in your camera.

            That’s all standalone raw SW is really

            a Jpeg engine.

            And if LR (etc) can take brand X raw and push it Y stops in the shadows, then there’s no reason that the camera can’t do this as it’s already deploying tone curves and colours etc when it makes a SOOC jpeg. It’s just operating within narrower parameters than you get in LR etc

            The hardware’s already there in the camera. The software’s already there in the camera.

            We just need a way to own that relationship, via the convenience of an external device, for the camera’s image pipeline to be as useful as standalone raw SW.

            The inverse could also be true…. (say) Adobe make a camera that’s harmonised from the ground up to work with their software products

            SOOC jpegs are like drug store prints

            Removing RAW from the camera and developing it is like having a home darkroom

            But IMO these are increasingly antiquated ways of seeing photography

            I’m talking about the adroit capabilities of standalone raw SW, with the speed of the in-camera processing, working together snugly in the walled garden of the OEM’s hardware

            This might all be doable wirelessly, via an iPad Pro or something… shoot raw, make edits on iPad, instruct camera (via iPad) to create your jpegs.

            In a sentence and in summary… (finally 😀 )

            Reduce the role of the computer to that of a thin client

            • adambonn,

              Interesting solution but I feel until the camera manufacturers release their underlying debayering methods to Adobe, PhaseOne, DxO, etc., workflows will continue to be rather time-consuming and in some instances the results end up less than ideal no matter how we move the sliders…

              • Actually, I can say with probably more authority than most that there’s a lot more collaboration on camera calibration between manufacturers and Adobe than you might think – however, whether Adobe chooses to implement it is another thing entirely…

                • I have noticed with the latest ACR update there is noticeable improvement in the treatment of Nikon NEF files…

            • I think I understand what you’re suggesting – an end to end workflow under the camera maker’s control.

              1. I would assume this makes sense only if you do the edits on/in camera, which are then limited by the camera’s monitor. Note: the design/technical requirements for a portable monitor are very different to a desktop one; you can’t easily have both wide gamut and outdoor-viewable brightness, for instance. This in itself limits usefulness.
              2. The way computing hardware works, the maker’s own software can still be happy under various GPU environments – there aren’t that many different ones to code for. And you’d have a lot more power than in-camera, plus solve the monitor problem.

              Bigger question: what if you want to have a ‘taste’ other than what the manufacturer thinks is ‘right’? They’re not going to give you options that they don’t agree with if the environment is 100% under their control. Plus if you use multiple cameras from multiple manufacturers – as many of us do – a universal converter is often the best way to get consistent results; you don’t really want to have to select systems based on jpeg output (!)

              • “an end to end workflow under the camera maker’s control.”

                Yes this! Except perhaps more, ‘an end-to-end process under YOUR control, but with the camera maker’s know-how of the best way to process their raw data’

                1. Yes, this is why the image processing would need to be in the camera, but the review of that processing and the decisions around it need to be on your computer, or pro tablet, or whatever one feels is adequate – certainly not the rear LCD though; that’s nowhere near big enough, either in size or spec

                2. It’s just a hunch… but I’m not convinced that a camera can’t make jpegs quicker than your raw editor can… Asking LR or C1 or whatever to churn out 100 edited RAW files into Jpegs takes a while, a camera can do this quickly. I just want to be able to control the edits on the jpeg with as many tools as a standalone editor (like in LR/C1/etc)

                Not sure what you mean by ‘taste’ – many of the camera OEMs are a little in bed with an external software company (eg Leica and Adobe, Sony and C1), and yet you can still overwrite (say) the embedded Leica profile in LR with the Adobe one, so maybe this doesn’t bother camera makers that much? If by taste you mean colours and tones, then the level of SW I’m envisaging would certainly have tone curves and HSL tools. Make your own taste.

                I think your final point is really the nail in the coffin… we need SW that works with all of our bodies…. Of course, if a camera OEM made SW that was head and shoulders above what the existing 3rd party raw SW was delivering, then it might drive camera sales for that brand… It would probably only work at each end of the spectrum though… You’d need to have a vast product range to convince folk that (say) Sony was the only camera manufacturer you needed to shop with, or at the other end, be someone like Hasselblad, where if they deliver the best editing SW, then you’d probably use it even if all of your other cameras’ editing was done in a different app – simply because one didn’t buy an MF Hassy to be frugal on the output! (At least I hope not!)

                • Frugality of output: I think this is actually a consequence of effort to capture/process. You shoot less, but the quality goes up because there’s more pre-curation from a compositional standpoint occurring at capture. Think about the equivalent of asking yourself to print every image…and only printing the really, really good ones. Not a bad thing, since it forces a general upping of one’s game…

        • I’m surely missing something here, so forgive me if I’ve got the wrong end of the stick. It seems that what you are suggesting is tethering the camera to your pc so that the pc can read the card whilst it is in the camera, the camera does all the editing, and only then to transfer the finished result over to the pc? Why not simply insert the card into a far more powerful pc with more computing power than a camera is ever likely to have and using the manufacturers’ own software let it do all the number crunching?

          In any case, at some stage you’re surely going to want to get those RAWs off the card in the camera and onto a hard drive?

          • You might find that no computer can actually import, demosaic raw and turn it into a jpeg quicker than your camera can, especially if you use LR 😀

            Who said anything about not being able to take the raw files off the camera? That would be crazy!

            • You mean you still use Lightroom? :D)

              Yes, getting images off the card! Heresy. Mind you, it could add a new meaning to flushing images.

  15. I’ve always shot jpeg+RAW as I like, but don’t necessarily need, a back-up in case the SOOC image isn’t to my liking, usually with the WB setting. Yes, I know I can set WB in camera, but it is such a fiddle to remember to re-set it all the time.

    RAW is a tool that some need, but I’d argue it is less important to others. When I moved to Fuji a few years ago, I’d read about their SOOC images being virtually on a par with the RAW conversions (whether one likes their images or not is another matter) and, apart from bit depth, which in practice I can’t see anyway, I found this virtually the case with my own experiments. Out of habit, I do still shoot jpeg+RAW, but now I find I can happily delete the RAW file once I’ve compared the SOOC images to a conversion.

    There are some areas of photography that rightfully demand utmost fidelity to the original, and only RAW will do, but for many of us this exactness isn’t needed; a best approximation, which SOOC mostly gives us, is more than acceptable. Interestingly, with the state of the art as it is – every respectable image editor sporting a RAW conversion module – Silkypix have recently released a jpeg-only editor which, with its unique treatment of jpegs, claims to extract more from a jpeg than hitherto possible. I haven’t yet got to grips with it, but from some initial trials it looks promising.

    • Thing is…I have to do so much work on a jpeg to get it to look the way I envision, I might as well start with raw to begin with. No penalty for storage these days 🙂

      • I suspect you are referring to the SilkyPix jpeg software here. I think I may not have been that clear about why I mentioned it. I agree it doesn’t make sense to use it as a main workflow with jpegs; one may as well shoot RAW and be done with it. I doubt they envisage it being a substitute for RAW – it can’t be – but if it delivers on its promise, then it seems it can do a better job at editing jpeg images than what has hitherto been possible. I purchased it (it was inexpensive) as potentially a means to “rescue” old jpegs that didn’t respond too well to the limited adjustments conventionally available.

        I’ve got a fair number of these from my initial foray into digital imaging. Whilst my first cameras could capture in RAW, they were very slow at saving the files – one took as long as 11 to 12 seconds per image, and 14 with RAW+jpeg, and with a small buffer it locked up for ages. So, mostly it had to be jpeg only, unless I wished to specialise in snail or tortoise photography. :D)

        • There’s a whole raft – SilkyPix, Olympus Studio, Phocus, Capture One, Nikon NX etc…none of them can replace photoshop, and are all more inconvenient than a JPEG if usage is non-demanding – I guess the problem is they are all still compromised solutions. (We are working on something more workflow-centric for Hasselblad, but that’s another story…)

      • Exactly where I net out. I don’t find working on a RAW any more time consuming than getting a JPEG the way I want it to look – not the way some engineer in Japan thinks it ought to look.

        • The latter part is the problem: it’s impossible for any engineer to guess what we might intend creatively. The best option is faithful/flexible/neutral and then we season to taste…some prefer more salt, others more pepper. 🙂

  16. jean pierre (pete) guaron says:

    Once again, a great article – and once again, my deepest thanks for sharing your knowledge & expertise, Ming.

    Being somewhat lower in the social scale of photographers than you, I can only (a) heartily agree and (b) put forward three reasons “why”:
    1 – Pixels ain’t photons – and the only guy with a sensor capable of capturing individual photons reckons it will NEVER make it to being installed in cameras.
    2 – Dynamic range in nature is presumably close on infinite. Dynamic range in a photograph is less than 100% of the light falling on the paper. Dynamic range in images projected on any form of computer or laptop or tablet or cellphone is also more restricted than in nature.
    3 – We capture colours in RGB and translate them through a printer into CMYK. The tonal range of colours that this is capable of producing CANNOT match the tonal range of colours in nature. (And if anyone thinks it’s somehow “better” if the image remains in computers etc – no, it’s generally worse!)

    So what to do? What I do is simple – I try to create an “acceptable replica” of what I saw when I took the photo. And generally, without TOO much manipulation, I succeed. I wouldn’t presume to suggest mine are anywhere near the standard of yours, Ming – but they are graciously accepted by the people I take them for, who hang them up all over the place, and I can go home happy, to produce some more. Which is what I was in the middle of doing when I caught your post – actually it’s half a dozen sunrise shots in my street, from roughly outside my front door – looking towards the sunrise in the first few and away from it in the others. Taken because someone told me you can’t take “good shots” unless you move away from home base, because you don’t try as hard when the subject matter is too familiar – and being pigheaded and stubborn, I’ve been taking heaps of photos there, ever since, to see if I can prove that was wrong.

    • 1. I wouldn’t rule that out, though fundamentally we don’t need to capture 100% of everything to accurately give the emotional impression of a scene – we just need enough. And given we can do that already…it isn’t the capture medium that’s the problem as much as the output.

      2. True – but again it’s an impression rather than absolute: if we handle the rolloff in such a way as the small increments are imperceptible, then we might not miss the difference – especially if the underlying structure of the composition forces you to look at the subject anyway. On top of this, our eyes do not have infinite dynamic range.

      3. This one is less of a problem. The mapping is good, but nowhere near perfect. I have a feeling monitors have a long way to go yet; in fact, there’s probably higher potential fidelity in a monitor, since it’s transmissive and the total dynamic range isn’t limited to the ambient light falling on the paper.

      Acceptable replica is right: photography is about creating an impression, a feeling, and perhaps stimulating something deeper in the shared memory or consciousness. Whilst higher and higher fidelity helps sharpen the ideas and reduce ambiguity of the message, sometimes it’s not a bad thing to let the audience complete the story – after all, it’s probably analogous to why the book is always better than the movie…

  17. Michael Fleischer says:

    Great illustrations to prove your creative point – love the subtle catchlight on the foreground rock. This nicely points to the crucial question:
    what am I trying to express…and do I have the skills to achieve it? ;-) I hope machines/programs will not be able to replicate the complexity of humans in the near future – else why should we make free/creative choices! Sometimes, though, I miss the simplicity of making slides, although each film had its obvious limitations…

    • I don’t think they ever will be able to: if you can reduce an image to an algorithm, you remove what makes an image interesting in the first place – the fact that it’s different from other images. It’s impossible to program ‘the difference’. This is a sort of version of the negative proof problem: whilst you can say ‘make an image that has a), b) and c)’, you can’t say ‘make an image that doesn’t have a), b) and c)’ – that’s too broad. The human factor forms the curation filter…

  18. Steve Gombosi says:

    This strikes me as pretty close to the digital equivalent of Ansel Adams’s old dictum: “The negative is the score, the print is the performance.”

    Some truths are eternal, I guess, although they’re probably irrelevant if you’re shooting primarily for low-res output on uncalibrated screens.

    • The bit about uncalibrated screens is true. As for Ansel – effectively, if the information wasn’t captured, you can’t put it back in. But if it’s there, you can choose to discard it later.

  19. I used to think tweaking parameters was cheating. I no longer think that. When I was a kid learning photography before digital, I remember someone telling me, “It’s not necessarily your exposure. All great photographs are made (not taken) in the darkroom.”
    I guess I subscribe these days to the notion that art isn’t necessarily in the eye of the beholder; rather, it is in the eye of the creator, or the original feeler. That leads me to your words in this piece. I think the thing that will continue to keep image making artistic is that I doubt an algorithm will ever be capable of heart and soul. It is heart and soul that connects what the eye sees to the brain, to visualize what we perceive individually and what we then “feel” emotionally.
    Processors and algorithms will continue to improve and that will help our ability to show others what we saw or to create images of things we imagine but it won’t be emotional.

    • Look at it this way: a JPEG is an engineer’s serving suggestion; it isn’t an absolute nor is it necessarily one’s personal preference. A bit like what you could cook from a cake mix, but not necessarily how you’d prefer it…

      • Do you know what, Ming? You could have saved yourself a lot of time and effort writing this interesting article with these two sentences alone. :D) But even when we bake our own cakes (= RAW), the results don’t always turn out as we would wish (= a more acceptable result than the SOOC jpeg).

        • You’re probably right, but then it would have been skipped over by even more people than it was already (not being a review and all 😉 )…

          • A review is easier to digest; your musings provoke more thought. Many prefer comics to a good book! :D)

            • This is modern photography in a nutshell: the gains are all through thinking and practice; this requires effort, and is far less popular than just buying something 🙂

  20. It has always intrigued me how cameras are (partially, at least) judged by their JPEG engine. With some, the JPEGS are said to be good enough for immediate display (certain of the Fuji and Olympus cameras seem to have this reputation) and with others you just have to shoot RAW (Leica’s M series seems to be a case in point).

    I recall you having conversations with Lloyd Chambers before. I think you’d be interested in what he says about the JPEGs from the Sigma dp0 Quattro. In so many words, he calls them “essentially lossless” and suggests that if you nail the white balance and exposure, they are all but indistinguishable from RAW. Certainly useful if you’re looking at using Sigma’s slow-as-a-glacier processing software.

    In one of Scott Kelby’s “Day with Jay Maisel” videos, Kelby asked Jay if he shoots RAW or JPEG. Maisel answered with “I shoot both. I shoot RAW because people tell me I have to shoot RAW, but I shoot bracketed JPEG at the highest quality because, on the back of the camera, RAW looks like s–t, I don’t want to spend any more time in front of a computer than I have to, and given that 99 per cent of my interaction with viewers is on a screen, the JPEG is easily good enough”. I’m paraphrasing somewhat, but that’s the essence of it. Accordingly, this also goes back to one of the things you have often mentioned in the past: output medium. I have no doubt that for big prints, RAW is the best way to go, but for most people who view things online – even on a decent sized monitor – a good JPEG engine will produce good enough results unless you’re a pixel peeper or exceptionally demanding when it comes to image quality.

    When I shoot with my iPhone, I never bother with the third party RAW apps. I tried them, and couldn’t really see enough of a difference to justify it. I do shoot RAW with my other cameras as I use your workflow and the profiles are useful. Nevertheless, I can certainly see the appeal of SOOC in certain circumstances.

    • Sigma: yes and no; a JPEG is always going to be 8 bit, and no matter how lossless – if you need to do any post processing manipulation, it’s not going to replace a raw file with more bits.

      I too shoot both, but only because sometimes we need to have quick client rushes – and even automatic batch processing of 100MP files isn’t efficient.

    • david mantripp says:

      Really? Jay Maisel has a camera that shows RAW on the screen? I wonder who made that for him…