
Thursday, December 16, 2021

Camera phone

From Wikipedia, the free encyclopedia

Camera phones allow instant, automatic photo sharing. There is no need for a cable or removable card to connect to a desktop or laptop to transfer photos.

A camera phone is a mobile phone which is able to capture photographs and often record video using one or more built-in digital cameras. It can also send the resulting image wirelessly and conveniently. The first commercial color camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999.

Most camera phones are smaller and simpler than separate digital cameras. In the smartphone era, the steady sales increase of camera phones caused point-and-shoot camera sales to peak around 2010 and decline thereafter. The concurrent improvement of smartphone camera technology, together with its other multifunctional benefits, has led smartphones to gradually replace compact point-and-shoot cameras.

Most modern smartphones have only a menu choice to start a camera application program and an on-screen button to activate the shutter. Some also have a separate camera button for quickness and convenience. A few mobile phones, such as the 2009 Samsung i8000 Omnia II, have a two-level shutter button that mimics the half-press focusing of dedicated point-and-shoot digital cameras. A few camera phones are designed to resemble low-end digital compact cameras in appearance and, to some degree, in features and picture quality, and are branded as both mobile phones and cameras; an example is the Samsung Galaxy S4 Zoom.

The principal advantages of camera phones are cost and compactness; indeed for a user who carries a mobile phone anyway, the addition is negligible. Smartphones that are camera phones may run mobile applications to add capabilities such as geotagging and image stitching. Also, modern smartphones can use their touch screens to direct their camera to focus on a particular object in the field of view, giving even an inexperienced user a degree of focus control exceeded only by seasoned photographers using manual focus. However, the touch screen, being a general purpose control, lacks the agility of a separate camera's dedicated buttons and dial(s).

Starting in the mid-2010s, some advanced camera phones feature optical image stabilisation (OIS), larger sensors, bright lenses, 4K video and even optical zoom, for which a few use a physical zoom lens. Multiple lenses and multi-shot night modes are also common. Since the late 2010s, high-end smartphones typically have multiple lenses with different functions, to make more use of a device's limited physical space. Common additions include an ultrawide camera, a telephoto camera, a macro camera, and a depth sensor. Some phone cameras have a label that indicates the lens manufacturer, megapixel count, or features such as autofocus or zoom ability for emphasis; examples include the Samsung Omnia II (2009), Samsung Galaxy S II (2011), Sony Xperia Z1 (2013) and some successors, Nokia Lumia 1020 (2013), and the Samsung Galaxy S20 (2020).

Technology

Camera

Mobile phone cameras typically feature CMOS active-pixel image sensors (CMOS sensors), owing to their much lower power consumption compared with the charge-coupled device (CCD) sensors that few camera phones use. Some use CMOS back-illuminated sensors, which use even less energy, at a higher price than conventional CMOS and CCD sensors.

The usual fixed-focus lenses and smaller sensors limit performance in poor lighting. Lacking a physical shutter, some have a long shutter lag. The typical built-in LED flash illuminates less intensely, over a much longer exposure time, than a xenon strobe, and camera phones lack a hot shoe for attaching an external flash. Optical zoom and tripod screws are rare, and some models also lack a USB connection or a removable memory card. Most have Bluetooth and WiFi, and can take geotagged photographs. Some of the more expensive camera phones have only a few of these technical disadvantages, and, with bigger image sensors (a few are up to 1", such as the Panasonic Lumix DMC-CM1), their capabilities approach those of low-end point-and-shoot cameras. The few hybrid camera phones, such as the Samsung Galaxy S4 Zoom and Galaxy K Zoom, were equipped with real optical zoom lenses.

Samsung Galaxy S5 camera module, with floating element group suspended by ceramic bearings and a small magnet
 
Image showing the six molded elements in the Samsung Galaxy S5

As camera phone technology has progressed, lens design has evolved from a simple double Gauss or Cooke triplet to many molded plastic aspheric lens elements made with varying dispersions and refractive indices. Some phone cameras also apply corrections for distortion, vignetting, and various other optical aberrations to the image before it is compressed into the JPEG format.

A few smartphones, beginning with the LG G3 in 2014, are equipped with a time-of-flight sensor for infrared laser-assisted autofocus. A thermal imaging camera was first implemented in 2016 on the Caterpillar S60.

High dynamic range imaging merges multiple images with different exposure values for a balanced brightness across the image, and was first implemented in early-2010s smartphones such as the Samsung Galaxy S III and iPhone 5. The earliest known smartphone to feature high-dynamic-range filming is the Sony Xperia Z (2013), where the exposure is varied every two lines of pixels to create a spatially varying exposure (SVE).
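
As an illustration of the merging step, here is a minimal sketch in Python with NumPy; the simple mid-grey weighting scheme is an assumption for illustration, not any particular phone's algorithm:

  import numpy as np

  def fuse_exposures(frames):
      # Naive exposure fusion: 'frames' are images scaled to [0, 1].
      # Pixels near mid-grey get the highest weight, so near-black and
      # blown-out pixels contribute little to the merged result.
      frames = [np.asarray(f, dtype=np.float64) for f in frames]
      weights = [1.0 - 2.0 * np.abs(f - 0.5) + 1e-6 for f in frames]
      return sum(w * f for w, f in zip(weights, frames)) / sum(weights)

  # Example: dark, normal, and bright captures of the same (flat) scene
  dark, normal, bright = (np.full((4, 4), v) for v in (0.1, 0.5, 0.9))
  print(fuse_exposures([dark, normal, bright]))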

As of 2019, high-end camera phones can also record video at resolutions up to 4K and at up to 60 frames per second for smoother motion.

Zooming

Most camera phones have a digital zoom feature, which may allow zooming without quality loss if a resolution lower than the image sensor's full resolution is selected, as this makes use of the sensor's spare resolution. For example, at twice digital zoom, only a quarter of the image sensor's resolution is available. A few have optical zoom, and several have multiple cameras with different fields of view, combined with digital zoom as a hybrid zoom feature. For example, the Huawei P30 Pro uses a "periscope" 5x telephoto camera with up to 10x digital zoom, resulting in 50x hybrid zoom. External cameras can also be added, coupled to the phone wirelessly by Wi-Fi; they are compatible with most smartphones. Windows Phones can be configured to operate as a camera even if the phone is asleep.
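
The arithmetic behind such "lossless" digital zoom can be sketched as follows (Python; the 12 MP sensor and 3 MP output figures are illustrative assumptions, not taken from any particular phone):

  # At a digital zoom factor z, the camera crops the central 1/z of the sensor
  # in each dimension, so only 1/z^2 of the sensor's pixels remain usable.
  sensor_mp = 12.0   # assumed full sensor resolution, in megapixels
  output_mp = 3.0    # assumed selected photo resolution
  for zoom in (1, 2, 3):
      available_mp = sensor_mp / zoom ** 2
      verdict = "lossless" if available_mp >= output_mp else "upscaled"
      print(f"{zoom}x zoom: {available_mp:.1f} MP available -> {verdict} at {output_mp} MP output")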

Physical location

When viewed vertically from behind, the rear camera module on some mobile phones is located in the top center, while on others it is located in the upper left corner. The latter has ergonomic benefits, as the lens is less likely to be covered or soiled when the phone is held horizontally, and it allows more efficient packing of the tight physical device space, since neighbouring components do not have to be built around the lens.

Image format and mode

Images are usually saved in the JPEG file format. Since the mid-2010s, some high-end camera phones have a RAW photography feature, HDR, and "Bokeh mode". Phones with Android 5.0 Lollipop and later versions can install phone apps that provide similar features.

Audio recording

Mobile phones with multiple microphones usually allow video recording with stereo audio. Samsung, Sony, and HTC first implemented it in 2012 on the Galaxy S III, Xperia S, and One X, respectively. Apple implemented stereo audio recording starting with the 2018 iPhone XS family and iPhone XR.

Files and directories

Like dedicated (stand-alone) digital cameras, mobile phone camera software usually stores pictures and video files in a directory called DCIM/ in the internal memory, with numbered or dated file names. The former prevents files from being missed during transfers and makes them easy to count, while the latter makes files searchable by date and time, regardless of file-attribute resets during transfer or a possible lack of in-file metadata date/time information.

Some can store this media in external memory (a Secure Digital card or a USB On-The-Go pen drive).
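
A minimal sketch of the two naming schemes described above (Python; the directory layout and name patterns are illustrative assumptions rather than any vendor's exact convention):

  from datetime import datetime

  # Sequential numbering: easy to count and to spot missing files after a transfer.
  numbered = [f"DCIM/Camera/IMG_{n:04d}.jpg" for n in range(1, 4)]

  # Date/time naming: searchable by capture time even if file attributes are
  # reset during transfer and the file carries no usable in-file metadata.
  stamp = datetime(2021, 12, 16, 9, 30, 5)
  dated = [f"DCIM/Camera/{stamp.strftime('%Y%m%d_%H%M%S')}_{n:03d}.jpg" for n in range(3)]

  print(numbered)
  print(dated)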

Multimedia Messaging Service

A camera phone sending a photo taken by it using MMS

Camera phones can share pictures almost instantly and automatically via a sharing infrastructure integrated with the carrier network. Early developers including Philippe Kahn envisioned a technology that would enable service providers to "collect a fee every time anyone snaps a photo". The resulting technologies, Multimedia Messaging Service (MMS) and Sha-Mail, were developed in parallel with, and in competition with, open Internet-based mobile communication provided by GPRS and later 3G networks.

The first commercial camera phone complete with infrastructure was the J-SH04, made by Sharp Corporation; it had an integrated CCD sensor, with the Sha-Mail (Picture-Mail in Japanese) infrastructure developed in collaboration with Kahn's LightSurf venture, and was marketed from 2001 by J-Phone in Japan (today owned by SoftBank). It was also the world's first cellular mobile camera phone. The first commercial deployment of camera phones in North America was in 2004, when the wireless carrier Sprint deployed over one million camera phones manufactured by Sanyo, supported by the PictureMail infrastructure (Sha-Mail in English) developed and managed by LightSurf.

While early phones had Internet connectivity, working web browsers and email programs, the phone menu offered no way of including a photo in an email or uploading it to a web site. Connecting cables or removable media that would have enabled local transfer of pictures were also usually missing. Modern smartphones have almost unlimited connectivity and transfer options, with photograph attachment features.

External camera

During 2003 (as camera phones were gaining popularity), in Europe some phones without cameras had support for MMS and external cameras that could be connected with a small cable or directly to the data port at the base of the phone. The external cameras were comparable in quality to those fitted on regular camera phones at the time, typically offering VGA resolution.

One such example was the Nokia Fun Camera (model number PT-3), announced together with the Nokia 3100 in June 2003. The idea was for it to be used on devices without a built-in camera (connected via the Pop-Port interface); images taken on the camera (which offered VGA resolution and a flash) could be transferred directly to the phone to be stored or sent via MMS.

In 2013-2014 Sony and other manufacturers announced add-on camera modules for smartphones called lens-style cameras. They have larger sensors and lenses than those in a camera phone but lack a viewfinder, display and most controls. They can be mounted to an Android or iOS phone or tablet and use its display and controls. Lens-style cameras include:

  • Sony SmartShot QX series, announced and released in mid 2013. They include the DSC-QX100/B, the large Sony ILCE-QX1, and the small Sony DSC-QX30.
  • Kodak PixPro smart lens camera series, announced in 2014.
  • Vivicam smart lens camera series from Vivitar/Sakar, announced in 2014.
  • HTC RE: HTC also announced an external camera module for smartphones, which can capture 16 MP still shots and Full HD video. The RE module is also waterproof and dustproof, so it can be used in a variety of conditions.

External cameras for thermal imaging also became available in late 2014.

Microscope attachments were available from several manufacturers in 2019, as were adapters for connecting an astronomical telescope.

Limitations

Camera phone clamped to a tripod
  • Mobile phone form factors are small. They lack space for a large image sensor and for the dedicated knobs and buttons that would make for better ergonomics.
  • Controls work by a touchscreen menu system. The photographer must look at the menu instead of looking at the target.
  • Dedicated cameras have a compartment housing the memory card and battery. For most it is easily accessible by hand, allowing uninterrupted operation when storage or energy is exhausted (hot swapping). Meanwhile, the battery can be charged externally. Most mobile phones have a non-replaceable battery and many lack a memory card slot entirely. Others have a memory card slot inside a tray, requiring a tool for access.
  • Mobile phone operating systems are not able to boot immediately like the firmware of dedicated digital cameras and camcorders, and are prone to interference from processes running in the background.
  • Dedicated digital cameras, even low-budget ones, are typically equipped with a capacitor-discharge xenon flash, larger and far more powerful than the LED lamps found on mobile phones.
  • Because the default orientation of mobile phones is vertical, inexperienced users may intuitively be encouraged to film vertically, producing portrait-orientation video that is poorly suited to the horizontal screens typically used at home.
  • Due to their comparatively thin form factor, smartphones are typically unable to stand upright on their own and must be leaned, whereas dedicated digital cameras and camcorders typically have a flat bottom that lets them stand upright.
  • Smartphones lack dedicated stable tripod mounts, and can only be mounted through a less stable device that grips the unit's edges.

Software

Users may use bundled camera software, or install alternative software.

The graphical user interface typically features a virtual on-screen shutter button located towards the side with the home button and charging port, and a thumbnail previewing the last photo. There may be an option to use the volume keys to take a photo, record video, or zoom. Specific objects can usually be focused on by tapping on the viewfinder, and exposure may adjust accordingly; there may be an option to capture a photo with each tap.

Exposure value may be adjustable by swiping vertically after tapping to focus or through a separate menu option. It may be possible to lock focus and exposure by holding the touch for a short time, and exposure value may remain adjustable in this state. These gestures may be available while filming and for the front camera.

Focus lock has in the past also been implemented by holding the virtual shutter button. Another common use of holding the shutter button is burst shot, where multiple photos are captured in quick succession, with resolutions, speeds, and sequential limits varying among devices, and possibly with an option to trade off speed against resolution.

Lock screens typically allow the user to launch the camera without unlocking, so that moments are not missed. This may be implemented through an icon that is swiped away. Launching from anywhere may be possible through a double press of the stand-by or home button, or through a dedicated shutter button if present.

Camera software on more recent and higher-end smartphones (e.g. Samsung since 2015) allows for more manual control of parameters such as exposure and focus. This was first featured in 2013 on the camera-centric Samsung Galaxy S4 Zoom and Nokia Lumia 1020, but was later expanded among smartphones.

The camera software may indicate the estimated number of photographs that can be taken before storage space is exhausted and, while recording, the current video file size and the remaining storage space, as done on early-2010s Samsung smartphones.

Video recording

Video recording may be implemented as a separate camera mode, or merged into the main viewfinder page, as done from the Samsung Galaxy S4 through the S9. Specific resolutions may be implemented as a separate camera mode, as Sony did with 4K (2160p) on the Xperia Z2.

During video recording, it may be possible to capture still photos, possibly with a higher resolution than the video itself. For example, the Samsung Galaxy S4 captures still photos during video recording at 9.6 megapixels, which is the largest 16:9 crop of its 13-megapixel 4:3 image sensor.
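
The 9.6-megapixel figure follows from simple crop arithmetic, sketched below (Python; the 4128 × 3096 pixel dimensions are an assumption consistent with a roughly 13 MP 4:3 sensor):

  # A ~13 MP 4:3 sensor, cropped to 16:9 by keeping the full width and trimming rows.
  width, height = 4128, 3096          # assumed 4:3 sensor dimensions (~12.8 MP)
  crop_height = width * 9 // 16       # tallest 16:9 crop that fits: 2322 rows
  print(f"4:3 sensor: {width * height / 1e6:.1f} MP")
  print(f"16:9 crop:  {width * crop_height / 1e6:.1f} MP")   # ~9.6 MP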

Parameters adjustable during video recording may include flashlight illumination, focus, exposure, light sensitivity (ISO), and white balance. Some settings may only be adjustable while idle and locked while filming, such as light sensitivity and exposure on the Samsung Galaxy S7.

Recording time may be limited by software to fixed durations at specific resolutions, after which recording can be restarted. For example, 2160p (4K) recording is capped at five minutes on Samsung flagship smartphones released before 2016, ten minutes on the Galaxy Note 7, four minutes on the Galaxy Alpha, and six minutes on the HTC One M9. The camera software may temporarily disable recording while a high device temperature is detected.

"Slow motion" (high frame rate) video may be stored as real-time video which retains the original image sensor frame rate and audio track, or slowed down and muted. While the latter allows slow-motion playback on older video player software which lacks playback speed control, the former can act both as real-time video and as slow-motion video, and is preferable for editing as the playback speed and duration indicated in the video editor are real-life equivalent.

Settings menu

Camera settings may appear as a menu on top of an active viewfinder in the background, or as a separate page, the former of which allows returning to the viewfinder immediately without having to wait for it to initiate again. The settings may appear as a grid or a list. On Apple iOS, some camera settings such as video resolution are located separately in the system settings, outside the camera application.

The range of selectable resolution levels for photos and videos varies among camera software. There may be settings for frame rate and bit rate, as on the LG V10, where they are implemented independently within a supported pixel rate (product of resolution and frame rate).
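
Such a constraint can be thought of as a pixel-rate budget, as in the sketch below (Python; the 250-megapixel-per-second cap is an invented figure for illustration, not the LG V10's actual limit):

  MAX_PIXEL_RATE = 250_000_000   # assumed supported pixel rate, in pixels per second

  modes = [(3840, 2160, 30), (3840, 2160, 60), (1920, 1080, 120), (1280, 720, 240)]
  for w, h, fps in modes:
      rate = w * h * fps         # pixel rate = resolution x frame rate
      verdict = "supported" if rate <= MAX_PIXEL_RATE else "over budget"
      print(f"{w}x{h} @ {fps} fps -> {rate / 1e6:.0f} MP/s ({verdict})")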

When the selected photo or video resolution is below that of the image sensor, digital zooming may allow limited magnification without quality loss by cropping into the image sensor's spare resolution. This is known as "lossless digital zoom". Zooming is typically implemented through pinch and may additionally be controllable through a slider. On early-2010s Samsung Galaxy smartphones, a square visualized the magnification.

Other functionality

The ability to take photographs and film from both front and rear cameras simultaneously was first implemented in 2013 on the Samsung Galaxy S4, where the two video tracks are stored picture-in-picture. No implementation with separate video tracks within one file, or as separate video files, is known yet.

High-dynamic-range imaging, also referred to as "rich tone", keeps brightness across the image within a visible range. Camera software may have an option for toggling it automatically depending on necessity. Deactivating HDR may be desirable, as HDR may add shutter lag, introduce ghosting from the merged photos, and discard EXIF metadata. A possible option allows retaining both the HDR and non-HDR variants of the same photo. HDR may be supported for panorama shots and video recording, if supported by the image sensor.

Voice commands were first featured in 2012 on the camera software of the Samsung Galaxy S3, and the ability to take a photo after a short countdown initiated by hand gesture was first featured in 2015 on the Galaxy S6.

The camera effects introduced by Samsung on the Galaxy S3 and S4, including "best photo" (which automatically picks the best shot), "drama shot" (which multiplies moving objects), and "eraser" (which can remove moving objects), were merged into "shot & more" on the Galaxy S5, allowing them to be applied retrospectively to a burst of eight images stored in a single file.

In 2014, HTC implemented several visual effect features as part of their dual-camera setup on the One M8, including weather, 3D tilting, and focus adjustment after capture, branded "uFocus". The last was branded "Selective Focus" by Samsung, additionally with the "pan focus" option to make the entire depth of field appear in focus.

Camera software may have an option for automatically capturing a photograph or video when launched.

Some smartphones since the mid-2010s have the ability to attach short videos surrounding or following the moment to a photo. Apple has branded this feature as "live photo", and Samsung as "motion photo".

Shortcuts to settings in the camera viewfinder may be customizable.

A "remote viewfinder" feature has been implemented into few smartphones' camera software (Samsung Galaxy S4, S4 Zoom, Note 3, S5, K Zoom, Alpha), where the viewfinder and camera controls are cast to a supported device through WiFi Direct.

An artificial-intelligence feature that notifies the user of flaws after each photograph, such as blinking eyes, misfocus, blur, and shake, was first implemented in 2018 on the Samsung Galaxy Note 9. Later phones from other manufacturers have more advanced AI features.

History

The J-SH04, developed by Sharp and released by J-Phone in 2000, was the first mass-market camera phone.

The camera phone, like many complex systems, is the result of converging and enabling technologies. Compared to digital cameras, a consumer-viable camera in a mobile phone would require far less power and a higher level of camera electronics integration to permit the miniaturization.

The metal-oxide-semiconductor (MOS) active pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The complementary MOS (CMOS) active pixel sensor (CMOS sensor) "camera-on-a-chip" was later developed by Eric Fossum and his team in the early 1990s. This was an important step towards realizing the modern camera phone as described in a March 1995 Business Week article. While the first camera phones (e.g. J-SH04) successfully marketed by J-Phone in Japan used charge-coupled device (CCD) sensors rather than CMOS sensors, more than 90% of camera phones sold today use CMOS image sensor technology.

Another important enabling factor was advances in data compression, due to the impractically high memory and bandwidth requirements of uncompressed media. The most important compression algorithm is the discrete cosine transform (DCT), a lossy compression technique that was first proposed by Nasir Ahmed while he was working at the University of Texas in 1972. Camera phones were enabled by DCT-based compression standards, including the H.26x and MPEG video coding standards introduced from 1988 onwards, and the JPEG image compression standard introduced in 1992.

Experiments

There were several early videophones and cameras that included communication capability. Some devices experimented with integration of the device to communicate wirelessly with the Internet, which would allow instant media sharing with anyone anywhere. The DELTIS VC-1100 by the Japanese company Olympus was the world's first digital camera with cellular phone transmission capability, revealed in the early 1990s and released in 1994. In 1995, Apple experimented with the Apple Videophone/PDA. There was also a digital camera with a cellular phone designed by Shosaku Kawashima of Canon in Japan in May 1997. In Japan, two competing projects were run by Sharp and Kyocera in 1997. Both had cell phones with integrated cameras. However, the Kyocera system was designed as a peer-to-peer video-phone, as opposed to the Sharp project, which was initially focused on sharing instant pictures. That was made possible when the Sharp device was coupled to the Sha-mail infrastructure designed in collaboration with the American technologist Kahn. The Kyocera team was led by Kazumi Saburi. In 1995, James Greenwold of Bureau of Technical Services, in Chippewa Falls, Wisconsin, was developing a pocket video camera for surveillance purposes. By 1999, the Tardis recorder was in prototype and being used by the government. Bureau of Technical Services advanced further with patent No. 6,845,215 B1 on a "Body-Carryable, digital Storage medium, Audio/Video recording Assembly".

A camera phone was patented by Kari-Pekka Wilska, Reijo Paajanen, Mikko Terho and Jari Hämäläinen, four employees at Nokia, in 1994. Their patent application was filed with the Finnish Patent and Registration Office on May 19, 1994, followed by several filings around the world making it a global family of patent applications. The patent application specifically described the combination as either a separate digital camera connected to a cell phone or as an integrated system with both sub-systems combined in a single unit. Their patent application design included all of the basic functions camera phones implemented for many years: the capture, storage, and display of digital images and the means to transmit the images over the radio frequency channel. On August 12, 1998, the United Kingdom granted patent GB 2289555B and on July 30, 2002, the USPTO granted US Patent 6427078B1 based on the original Finnish Patent and Registration Office application to Wilska, Paajanen, Terho and Hämäläinen.

The photo taken by Philippe Kahn on June 11, 1997

On June 11, 1997, Philippe Kahn instantly shared the first pictures from the maternity ward where his daughter Sophie was born. In the hospital waiting room he devised a way to connect his laptop to his digital camera and to his cell phone for transmission to his home computer. This improvised system transmitted his pictures to more than 2,000 family, friends and associates around the world. Kahn's improvised connections augured the birth of instant visual communications. Kahn's cell phone transmission is the first known publicly shared picture via a cell phone.

Commercialization

5-megapixel camera phones introduced in 2007: Nokia N95, LG Viewty, Samsung SGH-G800, Sony Ericsson K850i; they were marketed as having advanced cameras

The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It stored up to 20 JPEG digital images, which could be sent over e-mail, or the phone could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network. The Samsung SCH-V200, released in South Korea in June 2000, was also one of the first phones with a built-in camera. It had a TFT liquid-crystal display (LCD) and stored up to 20 digital photos at 350,000-pixel resolution. However, it could not send the resulting image over the telephone function, but required a computer connection to access photos. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication.

Cameras on cell phones proved popular right from the start, as indicated by J-Phone in Japan having more than half of its subscribers using cell phone cameras within two years. The world soon followed. In 2003, more camera phones were sold worldwide than stand-alone digital cameras, largely due to growth in Japan and Korea. In 2005, Nokia became the world's best-selling digital camera brand. In 2006, half of the world's mobile phones had a built-in camera. Also in 2006, Thuraya released the first satellite phone with an integrated camera; the Thuraya SG-2520 was manufactured by the Korean company APSI and ran Windows CE. In 2008, Nokia sold more camera phones than Kodak sold film-based simple cameras, thus becoming the biggest manufacturer of any kind of camera. In 2010, the worldwide number of camera phones totaled more than a billion. Since 2010, most mobile phones, even the cheapest ones, have been sold with a camera. High-end camera phones usually had a relatively good lens and high resolution.

A multi-camera foldable smartphone
 
Including an under-display camera
 
The Nokia N8 is the first Nokia smartphone with a 12-megapixel autofocus camera; it features Carl Zeiss optics with a xenon flash. The label indicates the lens manufacturer, megapixel count, aperture, and autofocus ability.

Higher resolution camera phones started to appear in the 2010s. 12-megapixel camera phones have been produced by at least two companies. To highlight the capabilities of the Nokia N8's camera and its large CMOS sensor, Nokia created a short film, The Commuter, in October 2010. The seven-minute film was shot entirely on the phone's 720p camera. A 14-megapixel smartphone with 3× optical zoom was announced in late 2010.

In 2011, the first phones with dual rear cameras were released to the market but failed to gain traction. Originally, dual rear cameras were implemented as a way to capture 3D content, which electronics manufacturers were pushing at the time. Several years later, the release of the iPhone 7 Plus would popularize the concept, with the second lens used as a telephoto lens. In 2012, Nokia announced the Nokia 808 PureView. It features a 41-megapixel 1/1.2-inch sensor and a high-resolution f/2.4 Zeiss all-aspherical one-group lens. It also features Nokia's PureView Pro technology, a pixel oversampling technique that reduces an image taken at full resolution into a lower-resolution picture, thus achieving higher definition and light sensitivity and enabling lossless zoom. In mid-2013, Nokia announced the Nokia Lumia 1020. In 2014, the HTC One M8 introduced the concept of using a camera as a depth sensor.

In late 2016, Apple introduced the iPhone 7 Plus, one of the phones to popularize a dual camera setup. The iPhone 7 Plus included a main 12 MP camera along with a 12 MP telephoto camera, which allowed for 2x optical zoom and, for the first time in a smartphone, Portrait Mode. In early 2018, Huawei released a new flagship phone, the Huawei P20 Pro, with the first triple camera setup. Its three sensors (co-engineered with Leica) are a 40-megapixel RGB sensor, a 20-megapixel monochrome sensor, and an 8-megapixel telephoto sensor. Features of the Huawei P20 Pro include 3x optical zoom and 960 fps slow motion. In late 2018, Samsung released a new mid-range smartphone, the Galaxy A9 (2018), with the world's first quad camera setup. The quadruple camera setup features a primary 24 MP f/1.7 sensor for normal photography, an ultra-wide 8 MP f/2.4 sensor with a 120-degree viewing angle, a 10 MP f/2.4 telephoto sensor with 2x optical zoom, and a 5 MP depth sensor for effects such as bokeh. The Nokia 9 PureView was released in 2019, featuring a penta-lens camera system.

The Huawei Mate 40 RS features penta-camera lenses with Leica optics.
 
The OnePlus 9 features upgraded optics with Hasselblad.

In 2019, Samsung announced the Galaxy A80, which has only rear cameras. When the user wants to take a selfie, the cameras automatically slide out of the back and rotate towards the user. This is known as a pop-up camera, and it allows smartphone displays to cover the entire front of the phone body without a notch or a punch hole in the top of the screen. Samsung, Xiaomi, OnePlus, and other manufacturers adopted a system where the camera "pops" out of the phone's body. Also in 2019, Samsung developed and began commercialization of 64- and 108-megapixel cameras for phones. The 108 MP sensor was developed in cooperation with the Chinese electronics company Xiaomi, and both sensors are capable of pixel binning, which combines the signals of 4 or 9 pixels and makes them act as a single, larger pixel. A larger pixel can capture more light, resulting in a higher usable ISO rating and lower image noise. Under-display cameras are also under development; these place the camera beneath a special display that allows it to see through the screen.
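
A minimal sketch of 2 × 2 pixel binning (Python with NumPy; the sensor values are synthetic, and real sensors bin in hardware rather than in software like this):

  import numpy as np

  def bin_pixels(sensor, factor=2):
      # Sum each factor-by-factor block of photosites into one output pixel,
      # trading resolution for light gathering (higher effective sensitivity).
      h, w = sensor.shape
      return sensor.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

  raw = np.arange(16, dtype=np.float64).reshape(4, 4)   # toy 4x4 sensor readout
  print(bin_pixels(raw, 2))   # 2x2 output; each value combines four photosites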

Manufacturers

Major manufacturers of cameras for phones include Sony, Toshiba, ST Micro, Sharp, Omnivision, and Aptina (Now part of ON Semiconductor).

Social impact

Taking a photograph with a cell phone
 
Taking a photo on a smartphone in landscape mode

Personal photography allows people to capture and construct personal and group memory, maintain social relationships, and express their identity. The hundreds of millions of camera phones sold every year provide the same opportunities, yet these functions are altered and allow for a different user experience. As mobile phones are constantly carried, they allow for capturing moments at any time. Mobile communication also allows for immediate transmission of content (for example via Multimedia Messaging Services), which cannot be reversed or regulated. Brooke Knight observes that "the carrying of an external, non-integrated camera (like a DSLR) always changes the role of the wearer at an event, from participant to photographer". The camera phone user, on the other hand, can remain a participant in whatever moment they photograph. Photos taken on a camera phone serve to prove the physical presence of the photographer. The immediacy of sharing and the liveness that comes with it allow the photographs shared through camera phones to emphasize their indexing of the photographer.

While phones have been found useful by tourists and for other common civilian purposes, as they are cheap, convenient, and portable; they have also posed controversy, as they enable secret photography. A user may pretend to be simply talking on the phone or browsing the internet, drawing no suspicion while photographing a person or place in non-public areas where photography is restricted, or against that person's wishes. Camera phones have enabled everyone to exercise freedom of speech by quickly communicating to others what they see with their own eyes. In most democratic free countries, there are no restrictions against photography in public and thus camera phones enable new forms of citizen journalism, fine art photography, and recording one's life experiences for facebooking or blogging.

Camera phones have also been very useful to street photographers and social documentary photographers, as they enable them to take pictures of strangers in the street without them noticing, thus allowing the artist/photographer to get close to subjects and take more lively photos. While most people are suspicious of secret photography, artists who do street photography (as Henri Cartier-Bresson did), photojournalists and photographers documenting people in public (like the photographers who documented the Great Depression in 1930s America) must often work unnoticed, as their subjects are often unwilling to be photographed or are not aware of legitimate uses of secret photography, such as the photos that end up in fine art galleries and journalism.

As a network-connected device, megapixel camera phones are playing significant roles in crime prevention, journalism and business applications as well as individual uses. They can also be used for activities such as voyeurism, invasion of privacy, and copyright infringement. Because they can be used to share media almost immediately, they are a potent personal content creation tool.

Camera phones limit the "right to be let alone", since this recording tool is always present. A security bug can allow attackers to spy on users through a phone camera.

In January 2007, New York City Mayor Michael Bloomberg announced a plan to encourage people to use their camera phones to capture crimes in progress or dangerous situations and send them to emergency responders. The program enables people to send their images or video directly to 911. The service went live in 2020.

Camera phones have also been used to discreetly take photographs in museums, performance halls, and other places where photography is prohibited. Because sharing can be instantaneous, even if the action is discovered, it is too late: the image is already out of reach, unlike a photo taken by a digital camera that only stores images locally for later transfer. That said, as newer digital cameras support Wi-Fi, a photographer can shoot with a DSLR and instantly post the photo on the internet through the mobile phone's Wi-Fi and 3G capabilities.

Apart from street photographers and social documentary photographers or cinematographers, camera phones have also been used successfully by war photographers. The small size of the camera phone allows a war photographer to secretly film the men and women who fight in a war, without them realizing that they have been photographed, thus the camera phone allows the war photographer to document wars while maintaining her or his safety.

In 2010, in Ireland, the annual "RTÉ 60 second short award" was won by 15-year-old Laura Gaynor, who made her winning cartoon, "Piece of Cake", on her Sony Ericsson C510 camera phone. In 2012, director and writer Eddie Brown Jr. made the reality thriller Camera Phone, one of the first commercially produced movies using camera phones as the story's perspective. The film is a reenactment of an actual case, and the names were changed to protect those involved. Some modern camera phones (in 2013–2014) have big sensors, allowing a street photographer or any other kind of photographer to take photos of similar quality to a semi-professional camera.

Camera as an interaction device

The cameras of smartphones are used as input devices in numerous research projects and commercial applications. A commercially successful example is the use of QR codes attached to physical objects. QR codes can be sensed by the phone using its camera and provide a corresponding link to related digital content, usually a URL. Another approach is using camera images to recognize objects. Content-based image analysis is used to recognize physical objects such as advertisement posters and to provide information about the object. Hybrid approaches use a combination of unobtrusive visual markers and image analysis. An example is estimating the pose of the camera phone to create a real-time overlay for a 3D paper globe.
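
As an example of the camera-as-input idea, a QR code in a captured frame can be decoded with an off-the-shelf detector. The sketch below uses OpenCV's QRCodeDetector in Python; the file name is a placeholder, and this is one possible library choice rather than the method used by any particular application:

  import cv2  # OpenCV

  image = cv2.imread("captured_frame.jpg")   # placeholder path to a camera frame
  if image is None:
      raise SystemExit("frame not found")
  detector = cv2.QRCodeDetector()
  data, _, _ = detector.detectAndDecode(image)   # decoded text, corner points, rectified code
  print("QR payload (often a URL):", data if data else "none found")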

Some smartphones can provide an augmented reality overlay for 2D objects and can recognize multiple objects on the phone using a stripped-down object recognition algorithm as well as GPS and a compass. A few can translate text from a foreign language. Auto-geotagging can show where a picture was taken, promoting interactions and allowing a photo to be mapped with others for comparison.

Smartphones can use their front camera (typically of lower performance than the rear camera), facing the user, for purposes such as self-portraiture (selfies) and videoconferencing.

Smartphones usually cannot be fixed on a tripod, which can cause problems when filming or when taking pictures with long exposure times.

A bystander uses his camera phone to record a skateboarder at LES skatepark, 2019

Laws

Camera phones, or more specifically the widespread use of such phones as cameras by the general public, have increased exposure to laws relating to public and private photography. The laws that relate to other types of cameras also apply to camera phones; there are no special laws for camera phones. Enforcing bans on camera phones has proven nearly impossible. They are small and numerous, and their use is easy to hide or disguise, making it hard for law enforcement and security personnel to detect or stop it. Total bans on camera phones would also raise questions about freedom of speech and freedom of the press, since a camera phone ban would prevent a citizen or a journalist (or a citizen journalist) from communicating to others a newsworthy event that could be captured with a camera phone.

From time to time, organizations and places have prohibited or restricted the use of camera phones and other cameras because of the privacy, security, and copyright issues they pose. Such places include the Pentagon, federal and state courts, museums, schools, theaters, and local fitness clubs. Saudi Arabia, in April 2004, banned the sale of camera phones nationwide for a time before reallowing their sale in December 2004 (although pilgrims on the Hajj were allowed to bring in camera phones). There is the occasional anecdote of camera phones linked to industrial espionage and the activities of paparazzi (which are legal but often controversial), as well as some hacking into wireless operators' network.

Notable events involving camera phones

  • The 2004 Indian Ocean earthquake was the first global news event where the majority of the first day news footage was no longer provided by professional news crews, but rather by citizen journalists, using primarily camera phones.
  • On November 17, 2006, during a performance at the Laugh Factory comedy club, comedian Michael Richards was recorded responding to hecklers with racial slurs by a member of the audience using a camera phone. The video was widely circulated in television and internet news broadcasts.
  • On December 30, 2006, the execution of former Iraqi dictator Saddam Hussein was recorded by a video camera phone, and made widely available on the Internet. A guard was arrested a few days later.
  • Camera phone video and photographs taken in the immediate aftermath of the 7 July 2005 London bombings were featured worldwide. CNN executive Jonathan Klein predicts camera phone footage will be increasingly used by news organizations.
  • Camera phone digital images helped to spread the 2009 Iranian election protests.
  • Camera phones recorded the BART Police shooting of Oscar Grant.

Camera phone photography

"Storm is coming", an example of iPhoneography

Photography produced specifically with phone cameras has become an art form in its own right. Work in this genre is sometimes referred to as iPhoneography (whether for photographs taken with an iPhone, or any brand of smart phone). The movement, though already a few years old, became mainstream with the advent of the iPhone and its App Store which provided better, easier, and more creative tools for people to shoot, process, and share their work.

Reportedly, the first gallery exhibition to feature iPhoneography exclusively opened on June 30, 2010: "Pixels at an Exhibition" was held in Berkeley, California, organized and curated by Knox Bronson and Rae Douglass. Around the same time, the photographer Damon Winter used Hipstamatic to make photos of the war in Afghanistan. A collection of these was published on November 21, 2010, in the New York Times in a series titled "A Grunt's Life", earning a third-place international award sponsored by the Donald W. Reynolds Journalism Institute (RJI). Also in Afghanistan, in 2011, photojournalist David Guttenfelder used an iPhone and the Polarize application. In 2013, National Geographic published a photo feature in which phoneographer Jim Richardson used his iPhone 5s to photograph the Scottish Highlands.

Photographic film

From Wikipedia, the free encyclopedia
Undeveloped 35 mm, ISO 125/22°, black and white negative film

Photographic film is a strip or sheet of transparent film base coated on one side with a gelatin emulsion containing microscopically small light-sensitive silver halide crystals. The sizes and other characteristics of the crystals determine the sensitivity, contrast, and resolution of the film.

The emulsion will gradually darken if left exposed to light, but the process is too slow and incomplete to be of any practical use. Instead, a very short exposure to the image formed by a camera lens is used to produce only a very slight chemical change, proportional to the amount of light absorbed by each crystal. This creates an invisible latent image in the emulsion, which can be chemically developed into a visible photograph. In addition to visible light, all films are sensitive to ultraviolet light, X-rays and gamma rays, and high-energy particles. Unmodified silver halide crystals are sensitive only to the blue part of the visible spectrum, producing unnatural-looking renditions of some colored subjects. This problem was resolved with the discovery that certain dyes, called sensitizing dyes, when adsorbed onto the silver halide crystals made them respond to other colors as well. First orthochromatic (sensitive to blue and green) and finally panchromatic (sensitive to all visible colors) films were developed. Panchromatic film renders all colors in shades of gray approximately matching their subjective brightness. By similar techniques, special-purpose films can be made sensitive to the infrared (IR) region of the spectrum.

In black-and-white photographic film, there is usually one layer of silver halide crystals. When the exposed silver halide grains are developed, the silver halide crystals are converted to metallic silver, which blocks light and appears as the black part of the film negative. Color film has at least three sensitive layers, incorporating different combinations of sensitizing dyes. Typically the blue-sensitive layer is on top, followed by a yellow filter layer to stop any remaining blue light from affecting the layers below. Next comes a green-and-blue sensitive layer, and a red-and-blue sensitive layer, which record the green and red images respectively. During development, the exposed silver halide crystals are converted to metallic silver, just as with black-and-white film. But in a color film, the by-products of the development reaction simultaneously combine with chemicals known as color couplers that are included either in the film itself or in the developer solution to form colored dyes. Because the by-products are created in direct proportion to the amount of exposure and development, the dye clouds formed are also in proportion to the exposure and development. Following development, the silver is converted back to silver halide crystals in the bleach step. It is removed from the film during the process of fixing the image on the film with a solution of ammonium thiosulfate or sodium thiosulfate (hypo or fixer). Fixing leaves behind only the formed color dyes, which combine to make up the colored visible image. Later color films, like Kodacolor II, have as many as 12 emulsion layers, with upwards of 20 different chemicals in each layer. Photographic film and film stock tend to be similar in composition and speed, but often not in other parameters such as frame size and length. Silver halide photographic paper is also similar to photographic film.

Characteristics of film

Film basics

Layers of 35 mm color film:
  1. Film base
  2. Subbing layer
  3. Red light sensitive layer
  4. Green light sensitive layer
  5. Yellow filter
  6. Blue light sensitive layer
  7. UV Filter
  8. Protective layer
  9. Visible light exposing film

There are several types of photographic film, including:

  • Print film, when developed, yields transparent negatives with the light and dark areas and colors (if color film is used) inverted to their respective complementary colors. This type of film is designed to be printed onto photographic paper, usually by means of an enlarger but in some cases by contact printing. The paper is then itself developed. The second inversion that results restores light, shade and color to their normal appearance. Color negatives incorporate an orange color correction mask that compensates for unwanted dye absorptions and improves color accuracy in the prints. Although color processing is more complex and temperature-sensitive than black-and-white processing, the wide availability of commercial color processing and scarcity of service for black-and-white prompted the design of some black-and-white films which are processed in exactly the same way as standard color film.
  • Color reversal film produces positive transparencies, also known as diapositives. Transparencies can be reviewed with the aid of a magnifying loupe and a lightbox. If mounted in small metal, plastic or cardboard frames for use in a slide projector or slide viewer they are commonly called slides. Reversal film is often marketed as "slide film". Large-format color reversal sheet film is used by some professional photographers, typically to originate very-high-resolution imagery for digital scanning into color separations for mass photomechanical reproduction. Photographic prints can be produced from reversal film transparencies, but positive-to-positive print materials for doing this directly (e.g. Ektachrome paper, Cibachrome/Ilfochrome) have all been discontinued, so it now requires the use of an internegative to convert the positive transparency image into a negative transparency, which is then printed as a positive print.
  • Black-and-white reversal film exists but is very uncommon. Conventional black-and-white negative film can be reversal-processed to produce black-and-white slides, as by dr5 Chrome. Although kits of chemicals for black-and-white reversal processing may no longer be available to amateur darkroom enthusiasts, an acid bleaching solution, the only unusual component which is essential, is easily prepared from scratch. Black-and-white transparencies may also be produced by printing negatives onto special positive print film, still available from some specialty photographic supply dealers.

In order to produce a usable image, the film needs to be exposed properly. The amount of exposure variation that a given film can tolerate, while still producing an acceptable level of quality, is called its exposure latitude. Color print film generally has greater exposure latitude than other types of film. Additionally, because print film must be printed to be viewed, after-the-fact corrections for imperfect exposure are possible during the printing process.

A plot of image density (D) vs. log exposure (H) yields a characteristic S-curve (the H&D curve) for each type of film, used to determine its sensitivity. Changing the emulsion properties or the processing parameters shifts the curve to the left or right; changing the exposure moves along the curve, helping to determine what exposure is needed for a given film. Note the non-linear response at the far left ("toe") and right ("shoulder") of the curve.

The concentration of dyes or silver halide crystals remaining on the film after development is referred to as optical density, or simply density; the optical density is proportional to the logarithm of the optical transmission coefficient of the developed film. A dark image on the negative is of higher density than a more transparent image.

Most films are affected by the physics of silver grain activation (which sets a minimum amount of light required to expose a single grain) and by the statistics of random grain activation by photons. The film requires a minimum amount of light before it begins to expose, and then responds by progressive darkening over a wide dynamic range of exposure until all of the grains are exposed, and the film achieves (after development) its maximum optical density.

Over the active dynamic range of most films, the density of the developed film is proportional to the logarithm of the total amount of light to which the film was exposed, so the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure. The plot of the density of the film image against the log of the exposure is known as an H&D curve. This effect is due to the statistics of grain activation: as the film becomes progressively more exposed, each incident photon is less likely to impact a still-unexposed grain, yielding the logarithmic behavior. A simple, idealized statistical model yields the equation density = 1 - (1 - k)^light, where light is proportional to the number of photons hitting a unit area of film, k is the probability of a single photon striking a grain (based on the size of the grains and how closely spaced they are), and density is the proportion of grains that have been hit by at least one photon. The relationship between density and log exposure is linear for photographic films except at the extreme ranges of maximum exposure (D-max) and minimum exposure (D-min) on an H&D curve, so the curve is characteristically S-shaped (as opposed to digital camera sensors, which have a linear response through the effective exposure range). The sensitivity (i.e., the ISO speed) of a film can be affected by changing the length or temperature of development, which would move the H&D curve to the left or right (see figure).
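
The idealized grain-activation model above can be tabulated directly, as in this short sketch (Python; the per-photon hit probability k is an arbitrary illustrative value):

  k = 1e-6                          # assumed probability that one photon hits a given grain
  for log_exposure in range(4, 9):  # exposure spanning several orders of magnitude
      light = 10 ** log_exposure    # photons per unit area of film
      density = 1 - (1 - k) ** light    # fraction of grains hit by at least one photon
      print(f"log10(exposure) = {log_exposure}: density = {density:.3f}")

The printed values rise slowly at first (the toe), climb steeply through the middle of the range, and level off near 1 (the shoulder), reproducing the S-shape of the H&D curve.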

If parts of the image are exposed heavily enough to approach the maximum density possible for a print film, then they will begin losing the ability to show tonal variations in the final print. Usually those areas will be considered overexposed and will appear as featureless white on the print. Some subject matter is tolerant of very heavy exposure. For example, sources of brilliant light, such as a light bulb or the sun, generally appear best as a featureless white on the print.

Likewise, if part of an image receives less than the beginning threshold level of exposure, which depends upon the film's sensitivity to light—or speed—the film there will have no appreciable image density, and will appear on the print as a featureless black. Some photographers use their knowledge of these limits to determine the optimum exposure for a photograph; for one example, see the Zone System. Most automatic cameras instead try to achieve a particular average density.

Color films can have many layers. The film base can have an antihalation layer applied to it or be dyed. This layer prevents light from reflecting from within the film, increasing image quality. If applied to the back of the film, it also serves to prevent scratching, as an antistatic measure due to its conductive carbon content, and as a lubricant to help transport the film through mechanisms. The antistatic property is necessary to prevent the film from getting fogged under low humidity, and mechanisms to avoid static are present in most if not all films. If applied on the back, it is removed during film processing. If applied, it may be on the back of the film base in triacetate film bases or in the front in PET film bases, below the emulsion stack. An anticurl layer and a separate antistatic layer may be present in thin high-resolution films that have the antihalation layer below the emulsion. PET film bases are often dyed, especially because PET can serve as a light pipe; black and white film bases tend to have a higher level of dyeing applied to them. The film base needs to be transparent but with some density, perfectly flat, insensitive to light, chemically stable, resistant to tearing and strong enough to be handled manually and by camera mechanisms and film processing equipment, while being chemically resistant to moisture and the chemicals used during processing without losing strength, flexibility or changing in size.

The subbing layer is essentially an adhesive that allows the subsequent layers to stick to the film base. The film base was initially made of highly flammable cellulose nitrate, which was replaced by cellulose acetate films, often cellulose triacetate film (safety film), which in turn was replaced in many films (such as all print films, most duplication films and some other specialty films) by a PET (polyethylene terephthalate) plastic film base. Films with a triacetate base can suffer from vinegar syndrome, a decomposition process accelerated by warm and humid conditions, that releases acetic acid which is the characteristic component of vinegar, imparting the film a strong vinegar smell and possibly even damaging surrounding metal and films. Films are usually spliced using a special adhesive tape; those with PET layers can be ultrasonically spliced or their ends melted and then spliced.

The emulsion layers of films are made by dissolving pure silver in nitric acid to form silver nitrate crystals, which are mixed with other chemicals to form silver halide grains, which are then suspended in gelatin and applied to the film base. The size, and hence the light sensitivity, of these grains determines the speed of the film; since films contain real silver (as silver halide), faster films with larger crystals are more expensive and potentially subject to variations in the price of silver metal. Also, faster films have more grain, since the grains (crystals) are larger. Each crystal is often 0.2 to 2 microns in size; in color films, the dye clouds that form around the silver halide crystals are often 25 microns across. The crystals can be shaped as cubes, flat rectangles, tetradecahedra, flat hexagons, or be flat and resemble a triangle with or without clipped edges; this last type of crystal is known as a T-grain crystal. Films using T-grains are more sensitive to light without using more silver halide, since they increase the surface area exposed to light by making the crystals flatter and larger in footprint instead of simply increasing their volume.

The exact silver halide used is either silver bromide or silver bromochloroiodide, or a combination of silver bromide, chloride and iodide.

In color films, each emulsion layer has a different color dye forming coupler: in the blue sensitive layer, the coupler forms a yellow dye; in the green sensitive layer the coupler forms a magenta dye, and in the red sensitive layer the coupler forms a cyan dye. Color films often have an UV blocking layer. Each emulsion layer in a color film may itself have three layers: a slow, medium and fast layer, to allow the film to capture higher contrast images. The color dye couplers are inside oil droplets dispersed in the emulsion around silver halide crystals, forming a silver halide grain. Here the oil droplets act as a surfactant, also protecting the couplers from chemical reactions with the silver halide and from the surrounding gelatin. During development, oxidized developer diffuses into the oil droplets and combines with the dye couplers to form dye clouds; the dye clouds only form around unexposed silver halide crystals. The fixer then removes the silver halide crystals leaving only the dye clouds: this means that developed color films may not contain silver while undeveloped films do contain silver; this also means that the fixer can start to contain silver which can then be removed through electrolysis. Color films also contain light filters to filter out certain colors as the light passes through the film: often there is a blue light filter between the blue and green sensitive layers and a yellow filter before the red sensitive layer; in this way each layer is made sensitive to only a certain color of light.

The color couplers may be either colorless or colored; colored (masking) couplers are used to improve the color reproduction of the film. The coupler used in the blue-sensitive layer remains colorless to allow all light to pass through, but the coupler used in the green-sensitive layer is colored yellow and the coupler used in the red-sensitive layer is light pink. Yellow was chosen to block any remaining blue light from exposing the underlying green- and red-sensitive layers (since yellow can be made from green and red); each layer should be sensitive to only a single color of light and allow all others to pass through. Because of these colored couplers, the developed film appears orange. Colored couplers also mean that corrections through color filters need to be applied to the image before printing. Printing can be carried out with an optical enlarger, or by scanning the image, correcting it in software and printing it with a digital printer.

Kodachrome films contain no dye couplers; the dyes are instead introduced during a long sequence of processing steps, which limited its adoption among smaller film processing companies.

Black and white films are very simple by comparison, only consisting of silver halide crystals suspended in a gelatin emulsion which sits on a film base with an antihalation back.

Many films have a top supercoat layer to protect the emulsion layers from damage. Some manufacturers formulate their films with daylight, tungsten (named after the tungsten filament of incandescent and halogen lamps) or fluorescent lighting in mind, recommending lens filters, light meters and test shots in some situations to maintain color balance, or recommending that the ISO value of the film be divided by the distance of the subject from the camera to arrive at an appropriate f-number to set on the lens.

Examples of color films are Kodachrome, often processed using the K-14 process; Kodacolor; Ektachrome, often processed using the E-6 process; and Fujifilm Superia, processed using the C-41 process. The chemicals and the color dye couplers on the film vary depending on the process used to develop the film.

Film speed

A roll of 400 speed Kodak 35 mm film.

Film speed describes a film's threshold sensitivity to light. The international standard for rating film speed is the ISO scale, which combines both the ASA speed and the DIN speed in the format ASA/DIN. Under this convention, a film with an ASA speed of 400 would be labeled ISO 400/27°. A fourth naming standard is GOST, developed by the Russian standards authority. See the film speed article for a table of conversions between ASA, DIN and GOST film speeds.
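The ASA and DIN halves of an ISO rating are two expressions of the same sensitivity: ASA is arithmetic, DIN is logarithmic. A minimal sketch of the conversion, assuming the standard relation DIN = 10·log10(ASA) + 1:

```python
import math

def asa_to_din(asa: float) -> int:
    """Convert an arithmetic (ASA) film speed to the logarithmic DIN scale."""
    return round(10 * math.log10(asa) + 1)

def din_to_asa(din: int) -> int:
    """Convert a DIN speed back to an approximate arithmetic (ASA) value."""
    return round(10 ** ((din - 1) / 10))

for asa in (25, 100, 200, 400, 800):
    print(f"ISO {asa}/{asa_to_din(asa)}°")
# ISO 25/15°, ISO 100/21°, ISO 200/24°, ISO 400/27°, ISO 800/30°
```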

Common film speeds include ISO 25, 50, 64, 100, 160, 200, 400, 800, 1600, 3200, and 6400. Consumer print films are usually in the ISO 100 to ISO 800 range. Some films, like Kodak's Technical Pan, are not ISO rated and therefore careful examination of the film's properties must be made by the photographer before exposure and development. ISO 25 film is very "slow", as it requires much more exposure to produce a usable image than "fast" ISO 800 film. Films of ISO 800 and greater are thus better suited to low-light situations and action shots (where the short exposure time limits the total light received). The benefit of slower film is that it usually has finer grain and better color rendition than fast film. Professional photographers of static subjects such as portraits or landscapes usually seek these qualities, and therefore require a tripod to stabilize the camera for a longer exposure. A professional photographing subjects such as rapidly moving sports or in low-light conditions will inevitably choose a faster film.

A film with a particular ISO rating can be push-processed, or "pushed", to behave like a film with a higher ISO, by developing for a longer amount of time or at a higher temperature than usual. More rarely, a film can be "pulled" to behave like a "slower" film. Pushing generally coarsens grain and increases contrast, reducing dynamic range, to the detriment of overall quality. Nevertheless, it can be a useful tradeoff in difficult shooting environments, if the alternative is no usable shot at all.
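The arithmetic behind pushing and pulling is simple: each stop of push or pull doubles or halves the exposure index the film is metered at (the development is then extended or shortened accordingly). A minimal illustration:

```python
def pushed_exposure_index(rated_iso: float, stops: float) -> float:
    """Exposure index to meter at when pushing (positive stops) or
    pulling (negative stops); each stop doubles or halves the EI."""
    return rated_iso * 2 ** stops

print(pushed_exposure_index(400, 2))    # 1600.0 -- a two-stop push
print(pushed_exposure_index(400, -1))   # 200.0  -- a one-stop pull
```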

Special films

Instant photography, as popularized by Polaroid, uses a special type of camera and film that automates and integrates development, without the need for further equipment or chemicals. This development is carried out immediately after exposure, as opposed to regular film, which is developed afterwards and requires additional chemicals. See instant film.

Films can be made to record non-visible ultraviolet (UV) and infrared (IR) radiation. These films generally require special equipment; for example, most photographic lenses are made of glass and therefore filter out most ultraviolet light, so expensive lenses made of quartz must be used instead. Infrared films may be shot in standard cameras using infrared band-pass or long-pass filters, although the shifted infrared focal point must be compensated for.

Exposure and focusing are difficult when using UV or IR film with a camera and lens designed for visible light. The ISO standard for film speed only applies to visible light, so visual-spectrum light meters are nearly useless. Film manufacturers can supply suggested equivalent film speeds under different conditions, and recommend heavy bracketing (e.g., "with a certain filter, assume ISO 25 under daylight and ISO 64 under tungsten lighting"). This allows a light meter to be used to estimate an exposure. The focal point for IR is slightly farther away from the camera than visible light, and UV slightly closer; this must be compensated for when focusing. Apochromatic lenses are sometimes recommended due to their improved focusing across the spectrum.
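Bracketing around a metered reading is itself just powers-of-two arithmetic on the exposure time. A small sketch, assuming the meter has been set to the manufacturer's suggested equivalent ISO for the filter and lighting in use:

```python
def bracketed_times(metered_time_s: float, stops=(-2, -1, 0, 1, 2)):
    """Shutter times for a bracket around a metered exposure;
    each stop doubles or halves the exposure time."""
    return [metered_time_s * 2 ** s for s in stops]

# Suppose the meter reads 1/125 s; bracket two stops either side of that reading:
print([round(t, 4) for t in bracketed_times(1 / 125)])
# [0.002, 0.004, 0.008, 0.016, 0.032]
```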

Film optimized for detecting X-ray radiation is commonly used for medical and industrial radiography by placing the subject between the film and a source of X-rays or gamma rays, without a lens, as if a translucent object were imaged by being placed between a light source and standard film. Unlike other types of film, X-ray film has a sensitive emulsion on both sides of the carrier material, which reduces the X-ray dose needed for an acceptable image, a desirable feature in medical radiography. The film is usually placed in close contact with phosphor screen(s) and/or thin lead-foil screen(s), the combination having a higher sensitivity to X-rays. Because film is also sensitive to X-rays, unprocessed film may be fogged by airport baggage scanners, particularly film faster than ISO 800. This sensitivity is exploited in film badge dosimeters.

Film optimized for detecting X-rays and gamma rays is sometimes used for radiation dosimetry.

Film has a number of disadvantages as a scientific detector: it is difficult to calibrate for photometry, it is not re-usable, it requires careful handling (including temperature and humidity control) for best calibration, and the film must physically be returned to the laboratory and processed. Against this, photographic film can be made with a higher spatial resolution than any other type of imaging detector, and, because of its logarithmic response to light, has a wider dynamic range than most digital detectors. For example, Agfa 10E56 holographic film has a resolution of over 4,000 lines/mm—equivalent to a pixel size of 0.125 micrometers—and an active dynamic range of over five orders of magnitude in brightness, compared to typical scientific CCDs that might have pixels of about 10 micrometers and a dynamic range of 3–4 orders of magnitude.
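The quoted equivalence between 4,000 lines/mm and a 0.125-micrometre pixel follows from allowing two samples per resolved line pair (this assumes "lines/mm" here means line pairs per millimetre). A quick arithmetic check:

```python
lines_per_mm = 4000                    # quoted resolution of Agfa 10E56 holographic film
pixel_um = 1000 / (2 * lines_per_mm)   # two samples per resolved line pair, in micrometres
print(pixel_um)                        # 0.125

# Dynamic range expressed as a contrast ratio rather than in orders of magnitude:
print(10 ** 5)                         # five orders of magnitude -> 100000:1
print(round(10 ** 3.5))                # a 3-4 decade CCD         -> ~3162:1
```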

Special films are used for the long exposures required by astrophotography.

Encoding of metadata

Some film cameras have the ability to read metadata from the film canister or encode metadata on film negatives.

Negative imprinting

Negative imprinting is a feature of some film cameras, in which the date, shutter speed and aperture setting are recorded on the negative directly as the film is exposed. The first known version of this process was patented in the United States in 1975, using half-silvered mirrors to direct the readout of a digital clock and mix it with the light rays coming through the main camera lens. Modern SLR cameras use an imprinter fixed to the back of the camera on the film backing plate. It uses a small LED display for illumination and optics to focus the light onto a specific part of the film. The LED display is exposed on the negative at the same time the picture is taken. Digital cameras can often encode all the information in the image file itself. The Exif format is the most commonly used format.
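As an illustration of how such metadata travels inside the image file, the sketch below reads a few Exif tags using the Pillow imaging library. Pillow is just one common choice, the file name is hypothetical, and exposure-related tags may sit in the Exif sub-IFD rather than the base IFD read here:

```python
from PIL import Image, ExifTags  # assumes the Pillow library is installed

def read_basic_exif(path: str) -> dict:
    """Return base-IFD Exif tags from an image file, keyed by readable tag names."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

info = read_basic_exif("photo.jpg")      # hypothetical file name
for key in ("Make", "Model", "DateTime"):
    print(key, info.get(key))
```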

DX codes

135 film cartridge with the DX barcode (top) and the DX CAS code on the black-and-white grid below it. The CAS code indicates the ISO, number of exposures and exposure latitude (+3/−1 for print film).

DX film edge barcode

In the 1980s, Kodak developed DX encoding (from Digital indeX), or DX coding, a feature that was eventually adopted by all camera and film manufacturers. DX encoding provides information on both the film cassette and the film itself about the type of film, the number of exposures and the speed (ISO/ASA rating) of the film. It consists of three parts. The first is a barcode near the film opening of the cassette (pictured above), identifying the manufacturer, film type and processing method; this is used by photofinishing equipment during film processing. The second is a barcode on the edge of the film itself (also pictured above), likewise used during processing, which indicates the film type, manufacturer and frame number and synchronizes the position of the frame. The third part, known as the DX Camera Auto Sensing (CAS) code, consists of a series of 12 metal contacts on the film cassette; cameras manufactured after 1985 could read these contacts to detect the type of film, the number of exposures and the ISO of the film, and use that information to automatically adjust the camera settings to the film's speed.
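Conceptually, the CAS contacts behave like two rows of six binary contacts read by pins in the camera's film chamber. The sketch below only illustrates that general idea: the bit layout and the speed lookup table are invented placeholders, not the published DX specification.

```python
# Illustrative only: the real DX CAS bit assignments and ISO table differ.
HYPOTHETICAL_SPEED_TABLE = {
    0b00110: 100,
    0b01000: 200,
    0b01010: 400,
}

def decode_cas(row1, row2):
    """row1/row2: six booleans per row (True = conductive contact);
    in the real scheme the first contact of each row is the common ground."""
    speed_bits = 0
    for contact in row1[1:]:              # remaining contacts treated here as a 5-bit code
        speed_bits = (speed_bits << 1) | int(contact)
    return {
        "iso": HYPOTHETICAL_SPEED_TABLE.get(speed_bits, "unknown"),
        # exposure count and latitude are carried by the second row in the real code
        "row2_raw": [int(c) for c in row2],
    }

print(decode_cas([True, False, True, False, True, False],
                 [True, True, False, False, True, True]))
```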

Common sizes of film

Film designation | Film width (mm) | Image size (mm) | Number of images | Comments
110 | 16 | 13 × 17 | 12/20 | Single perforations, cartridge loaded
APS/IX240 | 24 | 17 × 30 | 15/25/40 | e.g., Kodak "Advantix"; different aspect ratios possible; data recorded on magnetic strip; processed film remains in cartridge
126 | 35 | 26 × 26 | 12/20/24 | Single perforations, cartridge loaded; e.g., Kodak Instamatic cameras
135 | 35 | 24 × 36 (1.0 × 1.5 in.) | 12–36 | Double perforations, cassette loaded; "35 mm film"
127 | 46 | 40 × 40 (also 40 × 30 or 40 × 60) | 8–16 | Unperforated, rolled in backing paper
120 | 62 | 45 × 60 | 16 or 15 | Unperforated, rolled in backing paper; for medium format photography
120 | 62 | 60 × 60 | 12 |
120 | 62 | 60 × 70 | 10 |
120 | 62 | 60 × 90 | 8 |
220 | 62 | 45 × 60 | 32 or 31 | Same as 120, but rolled with no backing paper, allowing double the number of images; unperforated film with leader and trailer
220 | 62 | 60 × 60 | 24 |
220 | 62 | 60 × 70 | 20 |
220 | 62 | 60 × 90 | 16 |
Sheet film | | 2¼ × 3¼ in. to 20 × 24 in. | 1 | Individual sheets, notched in one corner for identification; for large format photography
Disc film | | 10 × 8 | 15 |
Motion picture films | 8, 16, 35 and 70 | | | Double perforations, cassette loaded

History of film

The earliest practical photographic process was the daguerreotype; it was introduced in 1839 and did not use film. The light-sensitive chemicals were formed on the surface of a silver-plated copper sheet. The calotype process produced paper negatives. Beginning in the 1850s, thin glass plates coated with photographic emulsion became the standard material for use in the camera. Although fragile and relatively heavy, the glass used for photographic plates was of better optical quality than early transparent plastics and was, at first, less expensive. Glass plates continued to be used long after the introduction of film, and were used for astrophotography and electron micrography until the early 2000s, when they were supplanted by digital recording methods. Ilford continues to manufacture glass plates for special scientific applications.

The first flexible photographic roll film was sold by George Eastman in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and attached to a sheet of hardened clear gelatin. The first transparent plastic roll film followed in 1889. It was made from highly flammable cellulose nitrate film.

Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which remained considerably tougher, slightly more transparent and cheaper than the early acetate stock. The changeover was completed for X-ray films in 1933, and although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm films until it was finally discontinued in 1951.

Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised. They developed the H&D curve, which is specific to each film and paper: it plots photographic density against the logarithm of exposure and is used to determine the sensitivity, or speed, of the emulsion and hence the correct exposure.
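For reference, the density plotted on an H&D curve is the base-10 logarithm of the opacity (the reciprocal of the transmittance), plotted against the logarithm of exposure. A minimal sketch of tabulating such a curve from transmittance measurements; the sample values below are made up purely for illustration:

```python
import math

def optical_density(transmittance: float) -> float:
    """Density D = log10(1/T), where T is the fraction of light transmitted."""
    return math.log10(1 / transmittance)

# Made-up (exposure in lux-seconds, measured transmittance) pairs for one emulsion:
samples = [(0.01, 0.79), (0.1, 0.40), (1.0, 0.10), (10.0, 0.025)]
curve = [(round(math.log10(h), 2), round(optical_density(t), 2)) for h, t in samples]
print(curve)   # (log10 exposure, density) points of a toy characteristic curve
```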

Spectral sensitivity

Early photographic plates and films were usefully sensitive only to blue, violet and ultraviolet light. As a result, the relative tonal values in a scene registered roughly as they would appear if viewed through a piece of deep blue glass. Blue skies with interesting cloud formations photographed as a white blank. Any detail visible in masses of green foliage was due mainly to the colorless surface gloss. Bright yellows and reds appeared nearly black. Most skin tones came out unnaturally dark, and uneven or freckled complexions were exaggerated. Photographers sometimes compensated by adding in skies from separate negatives that had been exposed and processed to optimize the visibility of the clouds, by manually retouching their negatives to adjust problematic tonal values, and by heavily powdering the faces of their portrait sitters.

In 1873, Hermann Wilhelm Vogel discovered that the spectral sensitivity could be extended to green and yellow light by adding very small quantities of certain dyes to the emulsion. The instability of early sensitizing dyes and their tendency to rapidly cause fogging initially confined their use to the laboratory, but in 1883 the first commercially produced dye-sensitized plates appeared on the market. These early products, described as isochromatic or orthochromatic depending on the manufacturer, made possible a more accurate rendering of colored subject matter into a black-and-white image. Because they were still disproportionately sensitive to blue, a yellow filter, and consequently a longer exposure time, was required to take full advantage of their extended sensitivity.

In 1894, the Lumière Brothers introduced their Lumière Panchromatic plate, which was made sensitive, although very unequally, to all colors including red. New and improved sensitizing dyes were developed, and in 1902 the much more evenly color-sensitive Perchromo panchromatic plate was being sold by the German manufacturer Perutz. The commercial availability of highly panchromatic black-and-white emulsions also accelerated the progress of practical color photography, which requires good sensitivity to all the colors of the spectrum for the red, green and blue channels of color information to all be captured with reasonable exposure times.

However, all of these were glass-based plate products. Panchromatic emulsions on a film base were not commercially available until the 1910s and did not come into general use until much later. Many photographers who did their own darkroom work preferred to go without the seeming luxury of sensitivity to red—a rare color in nature and uncommon even in man-made objects—rather than be forced to abandon the traditional red darkroom safelight and process their exposed film in complete darkness. Kodak's popular Verichrome black-and-white snapshot film, introduced in 1931, remained a red-insensitive orthochromatic product until 1956, when it was replaced by Verichrome Pan. Amateur darkroom enthusiasts then had to handle the undeveloped film by the sense of touch alone.

Introduction to color

Experiments with color photography began almost as early as photography itself, but the three-color principle underlying all practical processes was not set forth until 1855, not demonstrated until 1861, and not generally accepted as "real" color photography until it had become an undeniable commercial reality in the early 20th century. Although color photographs of good quality were being made by the 1890s, they required special equipment, separate and long exposures through three color filters, complex printing or display procedures, and highly specialized skills, so they were then exceedingly rare.

The first practical and commercially successful color "film" was the Lumière Autochrome, a glass plate product introduced in 1907. It was expensive and not sensitive enough for hand-held "snapshot" use. Film-based versions were introduced in the early 1930s and the sensitivity was later improved. These were "mosaic screen" additive color products, which used a simple layer of black-and-white emulsion in combination with a layer of microscopically small color filter elements. The resulting transparencies or "slides" were very dark because the color filter mosaic layer absorbed most of the light passing through. The last films of this type were discontinued in the 1950s, but Polachrome "instant" slide film, introduced in 1983, temporarily revived the technology.

"Color film" in the modern sense of a subtractive color product with a multi-layered emulsion was born with the introduction of Kodachrome for home movies in 1935 and as lengths of 35 mm film for still cameras in 1936; however, it required a complex development process, with multiple dyeing steps as each color layer was processed separately. 1936 also saw the launch of Agfa Color Neu, the first subtractive three-color reversal film for movie and still camera use to incorporate color dye couplers, which could be processed at the same time by a single color developer. The film had some 278 patents. The incorporation of color couplers formed the basis of subsequent color film design, with the Agfa process initially adopted by Ferrania, Fuji and Konica and lasting until the late 70s/early 1980s in the West and 1990s in Eastern Europe. The process used dye-forming chemicals that terminated with sulfonic acid groups and had to be coated one layer at a time. It was a further innovation by Kodak, using dye-forming chemicals which terminated in 'fatty' tails which permitted multiple layers to coated at the same time in a single pass, reducing production time and cost that later became universally adopted along with the Kodak C-41 process.

Despite the greater availability of color film after World War II, over the next several decades it remained much more expensive than black-and-white and required much more light; these factors, combined with the greater cost of processing and printing, delayed its widespread adoption. Decreasing cost, increasing sensitivity and standardized processing gradually overcame these impediments. By the 1970s, color film predominated in the consumer market, while the use of black-and-white film was increasingly confined to photojournalism and fine art photography.

Effect on lens and equipment design

Photographic lenses and equipment are designed around the film to be used. Although the earliest photographic materials were sensitive only to the blue-violet end of the spectrum, partially color-corrected achromatic lenses were normally used, so that when the photographer brought the visually brightest yellow rays to a sharp focus, the visually dimmest but photographically most active violet rays would be correctly focused, too. The introduction of orthochromatic emulsions required the whole range of colors from yellow to blue to be brought to an adequate focus. Most plates and films described as orthochromatic or isochromatic were practically insensitive to red, so the correct focus of red light was unimportant; a red window could be used to view the frame numbers on the paper backing of roll film, as any red light which leaked around the backing would not fog the film; and red lighting could be used in darkrooms. With the introduction of panchromatic film, the whole visible spectrum needed to be brought to an acceptably sharp focus. In all cases a color cast in the lens glass or faint colored reflections in the image were of no consequence as they would merely change the contrast a little. This was no longer acceptable when using color film. More highly corrected lenses for newer emulsions could be used with older emulsion types, but the converse was not true.

The progression of lens design for later emulsions is of practical importance when considering the use of old lenses, still often used on large-format equipment; a lens designed for orthochromatic film may have visible defects with a color emulsion; a lens for panchromatic film will be better but not as good as later designs.

The filters used were different for the different film types.

Decline

Film remained the dominant form of photography until the early 21st century, when advances in digital photography drew consumers to digital formats. The first consumer electronic camera, the Sony Mavica, was released in 1981, and the first digital camera, the Fuji DS-X, in 1989. These, coupled with advances in software such as Adobe Photoshop (released in 1989), improvements in consumer-level digital color printers and the increasing presence of computers in households during the late 20th century, facilitated the uptake of digital photography by consumers.

The initial take-up of digital cameras in the 1990s was slow owing to their high cost and the relatively low resolution of their images compared to 35 mm film, but they began to make inroads in the consumer point-and-shoot market and in professional applications such as sports photography, where speed of results, including the ability to upload pictures directly from stadiums, mattered more for newspaper deadlines than resolution. A key difference compared to film was that early digital cameras quickly became obsolete, forcing users into a frequent cycle of replacement until the technology matured, whereas previously people might have owned only one or two film cameras in their lifetime. Consequently, photographers demanding higher quality in sectors such as weddings, portraiture and fashion, where medium format film predominated, were the last to switch, doing so only once resolution reached acceptable levels with the advent of "full frame" sensors, digital backs and medium format digital cameras.

Film camera sales, based on CIPA figures, peaked in 1998 before declining rapidly after 2000 to reach almost zero by the end of 2005 as consumers switched en masse to digital cameras (sales of which subsequently peaked in 2010). These changes foretold a similar reduction in film sales. Figures for Fujifilm show that global film sales, having grown 30% in the preceding five years, peaked around the year 2000. Film sales then began to fall year on year, with the decline increasing in magnitude from 2003 to 2008 and reaching 30% per annum before slowing. By 2011, sales were less than 10% of the peak volumes. Similar patterns were experienced by other manufacturers, varying by market exposure, with global film sales estimated at 900 million rolls in 1999 declining to only 5 million rolls by 2009. This period wreaked havoc on the film manufacturing industry and its supply chain, which had been optimised for high production volumes; plummeting sales saw firms fighting for survival. Agfa-Gevaert's decision to sell off its consumer-facing arm (AgfaPhoto) in 2004 was followed by a series of bankruptcies of established film manufacturers: Ilford Imaging UK in 2004, AgfaPhoto in 2005, Forte in 2007, Foton in 2007, Polaroid in 2001 and 2008, Ferrania in 2009 and Eastman Kodak in 2012. Kodak survived only after massive downsizing, while Ilford was rescued by a management buyout. Konica-Minolta closed its film manufacturing business and exited the photographic market entirely in 2006, selling its camera patents to Sony, while Fujifilm successfully and rapidly diversified into other markets. The impact of this paradigm shift in technology subsequently rippled through the downstream photo processing and finishing businesses.

Although modern photography is dominated by digital users, film continues to be used by enthusiasts. Film remains the preference of some photographers because of its distinctive "look".

Renewed interest in recent years

Although digital cameras are by far the most commonly used photographic tool and the selection of available photographic films is much smaller than it once was, sales of photographic film have been on a steady upward trend. Kodak (which was under bankruptcy protection from January 2012 to September 2013) and other companies have noticed this trend: Dennis Olbrich, President of the Imaging Paper, Photo Chemicals and Film division at Kodak Alaris, has stated that sales of their photographic films have been growing over the past three or four years. UK-based Ilford has confirmed the trend and conducted extensive research on the subject; its research shows that 60% of current film users started using film only in the past five years and that 30% of current film users are under 35 years old. Annual film sales, estimated to have reached a low point of 5 million rolls in 2009, had doubled to around 10 million rolls by 2019. A key challenge for the industry is that production still relies on the remaining coating facilities built for the peak years of demand, while other process steps that have been downscaled, such as converting the film, have become capacity bottlenecks for companies such as Kodak as demand has grown.

In 2013 Ferrania, an Italian film manufacturer that had ceased production of photographic films between 2009 and 2010, was acquired by the newly formed Film Ferrania S.R.L., which took over a small part of the old company's manufacturing facilities centred on its former research facility and re-employed some workers who had been laid off three years earlier when film production stopped. In November of the same year, the company started a crowdfunding campaign with the goal of raising $250,000 to buy tooling and machines from the old factory, intending to put some of the discontinued films back into production; the campaign succeeded, closing in October 2014 with over $320,000 raised. In February 2017, Film Ferrania unveiled "P30", an 80 ASA panchromatic black-and-white film, in 35 mm format.

Kodak announced on January 5, 2017, that Ektachrome, one of its best-known transparency films, discontinued between 2012 and 2013, would be reformulated and manufactured once again in 35 mm still and Super 8 motion picture formats. Following the success of the release, Kodak expanded Ektachrome's availability by also releasing the film in 120 and 4×5 formats.

Japan-based Fujifilm's "Instax" instant film cameras and paper have also proven very successful and have replaced traditional photographic films as Fujifilm's main film products, although the company continues to offer traditional photographic films in various formats and types.[59]

 
