Why Megapixel Count Does Not Measure the True Resolution of a Digital Microscopy System
For some time, we have seen the industry sell cameras under the false argument that “more megapixels equals more resolution.” This document, grounded in undisputed physics, dismantles that myth. The true resolution of any microscopy system is determined exclusively by the numerical aperture of the objective and the wavelength of light. A 12‑megapixel sensor coupled to a conventional 100× objective transmits, at best, less than 2 megapixels of actual optical information. The rest are empty pixels. This document quantifies that limit for all standard objectives on the market and lays the groundwork for a finally honest evaluation of capture systems.
When a manufacturer of microscopy cameras announces a new model, the number printed largest in the brochure is almost always the megapixel count. Twenty. Forty‑five. One hundred. The implicit message is always the same: more pixels, more detail, better science. It is a persuasive argument. And it is fundamentally wrong.
In optical microscopy, resolution is not determined by the sensor. It is determined by the physics of light passing through a lens. That physics—diffraction—imposes an absolute, unbreakable limit that no photodiode density can overcome. Beyond that limit, adding pixels does not add information; it adds redundancy. And redundancy has real costs: higher relative noise, larger data volumes, more expensive hardware and—most importantly—a false sense of quality that misleads the user.
This document is not intended as an opinion piece. It is a quantitative derivation based on scalar diffraction theory, the optical transfer function and the Nyquist–Shannon sampling theorem—pillars of physics and information theory that have withstood every experimental test since their formulation. The results are numerical, reproducible and publicly verifiable. The goal is simple: any microscopist should be able to know, before buying or evaluating a camera, how many megapixels of real information they can expect from a given objective–sensor combination.
When light passes through the circular aperture of a microscope objective, it does not behave like a geometric ray converging to a mathematical point. It behaves like a wave, and waves diffract when they pass through finite apertures. As a result, the image of any point in the specimen is not a point on the sensor; it is a blurry disc surrounded by concentric rings of diminishing brightness. This is called the Airy disc, and its size depends only on two things: the wavelength of light and the aperture of the lens. This is not a manufacturing defect. It is not something you fix with better glass or a bigger budget. It is a direct consequence of Maxwell’s equations, which describe how electromagnetic radiation propagates. There is no way around it with conventional optics.
The practical consequence follows directly: two points in the specimen that are closer together than a certain minimum distance will generate overlapping Airy discs on the sensor, and the detector will be unable to distinguish them as two separate points. That limit is called optical resolution, and it is the fundamental parameter of any microscope.
The minimum lateral resolution of a microscope—the smallest distance between two points that can still be distinguished—can be calculated using the Rayleigh criterion, which builds directly on the diffraction limit Ernst Abbe derived in 1873 and which remains valid today:
d = 0.61 × λ / NA
where λ is the wavelength of light (typically 550 nm for green light, the usual reference wavelength because it sits near the peak of human visual sensitivity) and NA is the Numerical Aperture of the objective, defined as NA = n × sin(α), with n the refractive index of the medium between objective and specimen, and α the half‑angle of the captured light cone.
This result determines the true resolution of the system. Magnification does not appear in this formula. A 10× objective with NA 0.25 resolves exactly the same minimum detail as a 40× objective with NA 0.25: 1.34 μm. Magnification only determines how large that detail appears on the sensor, not whether it can be distinguished.
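The claim that magnification plays no role can be checked in a few lines. The sketch below simply evaluates the Rayleigh formula from the text; the NA values are the ones quoted above.

```python
WAVELENGTH_NM = 550  # green light, the reference wavelength used in the text

def rayleigh_resolution_um(na, wavelength_nm=WAVELENGTH_NM):
    """Smallest resolvable separation in micrometres: d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na / 1000.0

# Magnification never enters the formula: a 10x and a 40x objective with the
# same NA 0.25 resolve exactly the same minimum detail.
print(rayleigh_resolution_um(0.25))  # -> 1.342 (um), for both objectives
```

Running it for NA 0.25 yields 1.342 μm regardless of whether the objective is labelled 10× or 40×, exactly as stated above.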
The most rigorous way to characterise a system’s ability to transmit information is the Modulation Transfer Function (MTF). The MTF describes how much contrast the system preserves for each level of spatial detail, expressed in spatial frequencies (line pairs per millimetre).
For a perfect, diffraction‑limited lens, the MTF has a very characteristic shape: it starts at 100% contrast for coarse structures and falls progressively until it reaches zero at a well‑defined frequency called the cut‑off frequency. Above this frequency, contrast is strictly zero. Not degraded, not merely weak: absolute zero.
The cut‑off frequency is given directly by:
f_cutoff = 2 × NA / λ
Above f_cutoff, the lens transmits no information about the specimen at all. The optics are an absolute spatial low‑pass filter. This is why it is completely pointless to put a sensor behind the system that can sample frequencies higher than f_cutoff: there is nothing there to sample.
Figure 1: Qualitative representation of the MTF fall‑off for a diffraction‑limited microscope objective. Contrast decreases from 100% (coarse structures) to 0% at the cut‑off frequency; beyond that point, no further detail passes through the optical system, regardless of how many pixels the sensor has. Frequency is expressed as a percentage of the cut‑off frequency fc = 2·NA/λ. Values derived from the analytical formula MTF(ν) = (2/π)(φ − cos φ · sin φ), where φ = arccos(λν/(2·NA)).
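The analytical MTF quoted in the caption is straightforward to evaluate. The sketch below implements that formula for an aberration‑free circular aperture, with frequencies in cycles per micrometre (an arbitrary but convenient unit choice for this illustration).

```python
import math

def diffraction_mtf(nu, na, wavelength_um=0.55):
    """Diffraction-limited incoherent MTF of a circular aperture.
    nu: spatial frequency in cycles/um. Returns contrast in [0, 1]."""
    f_cutoff = 2 * na / wavelength_um          # f_cutoff = 2*NA/lambda
    if nu >= f_cutoff:
        return 0.0                             # strictly zero beyond cut-off
    phi = math.acos(nu / f_cutoff)             # phi = arccos(lambda*nu / (2*NA))
    return (2 / math.pi) * (phi - math.cos(phi) * math.sin(phi))

fc = 2 * 1.25 / 0.55                           # NA 1.25 oil objective
print(diffraction_mtf(0.0, 1.25))              # -> 1.0 (full contrast, coarse detail)
print(round(diffraction_mtf(fc / 2, 1.25), 3)) # -> 0.391 (half the cut-off)
print(diffraction_mtf(fc, 1.25))               # -> 0.0 (nothing passes)
```

At zero frequency the contrast is 100%, at half the cut‑off it has already fallen to roughly 39%, and at the cut‑off it is exactly zero, matching the curve in Figure 1.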
The Nyquist–Shannon sampling theorem states that to faithfully capture a continuous, band‑limited signal, the sampling frequency must be at least twice the highest frequency present in the signal. In imaging, this translates to the requirement that the pixel size in the object plane must be equal to or smaller than half of the finest detail that the optics can resolve:
p_object ≤ d_min / 2 = 0.61 × λ / (2 × NA)
Here p_object is the pixel size projected into the specimen plane, calculated as the physical pixel size of the sensor divided by the total magnification of the system (objective × relay).
If the pixel is larger than this limit, fine details that the optics could resolve are lost: the sensor lacks the resolution to record them. If the pixel is smaller—an oversampling situation—the sensor has more resolving capacity than the optical limit, and the extra pixels record the same information multiple times without adding new content.
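The sampling condition above reduces to simple arithmetic. The sketch below computes, for a hypothetical configuration (the 1.55 μm pixel of the reference sensor described later, a 100× NA 1.25 objective and a 0.5× relay), how many pixels fall across the smallest resolvable detail.

```python
def object_pixel_um(sensor_pixel_um, m_total):
    """Pixel size projected into the specimen plane."""
    return sensor_pixel_um / m_total

def samples_per_detail(na, sensor_pixel_um, m_total, wavelength_nm=550):
    """Pixels spanning the smallest resolvable detail d = 0.61*lambda/NA.
    Nyquist requires >= 2; the practical band is 2.3-3."""
    d_min_um = 0.61 * wavelength_nm / na / 1000.0
    return d_min_um / object_pixel_um(sensor_pixel_um, m_total)

# 100x objective, 0.5x relay -> total magnification 50x
s = samples_per_detail(na=1.25, sensor_pixel_um=1.55, m_total=50)
print(round(s, 1))  # -> 8.7 samples per detail: far beyond the ~3x useful limit
```

With roughly 8.7 samples per resolvable detail, this configuration sits well past the 2.3–3 range where oversampling stops paying off, which is precisely the regime discussed next.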
The theoretical Nyquist criterion requires sampling at 2× the maximum frequency. In practice, microscopists and imaging system designers work with a more conservative margin of 2.3 to 3 samples per smallest resolvable detail. The reasons are physical, not arbitrary:
The MTF falls gradually toward zero at f_cutoff. Near the limit, contrast may be below 5–10%, making it vulnerable to shot noise. Sampling exactly at 2× under those conditions means low‑contrast details are lost in the noise. Moderate oversampling (2.3–3×) improves the ability to recover them.
Colour sensors use Bayer filter arrays, where each pixel captures only one of three colour channels. The interpolation needed to reconstruct full colour reduces effective resolution by roughly 20–30% compared with a monochrome sensor.
Real objectives, even very high‑quality ones, have residual aberrations that broaden the PSF and depress the MTF at high frequencies, requiring some oversampling to extract all available information.
However—this is critical—once the sampling factor goes beyond ~3×, additional pixels no longer contribute to any improvement. They do not resolve more detail, do not increase contrast and do not yield a net reduction in noise. They just record the same content already captured by neighbouring pixels.
The proper way to quantify the information‑carrying capacity of a microscopy system is the Space–Bandwidth Product (SBP). SBP is the maximum number of independent spatial degrees of freedom—informationally distinct points—that the system can transmit from specimen to sensor.
For a rectangular sensor with active area Asens coupled to an optical system with given NA, the maximum number of megapixels of real optical information is:
MP_opt = A_FOV(μm²) × (4·NA/λ)² × 10⁻⁶
where A_FOV is the actual field of view of the objective–relay combination on the specimen, equal to the sensor's active area divided by the square of the total magnification. This is the formula used in the tables in Section 5 to calculate the true optical megapixels of each configuration.
One useful reference point: for a standard 1/2.3" format sensor (active area 29.6 mm²) and λ = 550 nm, the hardware constant reduces to 1566.88, and the operational equation becomes simply:
MP_opt = 1566.88 × (NA / M_total)²
where M_total = M_objective × M_relay. This formula explains why, as magnification goes up, the real optical megapixels fall quadratically: doubling the magnification reduces the real megapixels by a factor of four.
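The SBP relation is easy to reproduce from first principles. The sketch below rebuilds the operational equation from the sensor area and wavelength given in the text; note that the rounded 29.6 mm² area yields a constant of about 1565.6, marginally below the 1566.88 the text quotes (the difference is rounding in the published area, not a change in the physics).

```python
SENSOR_AREA_UM2 = 29.6e6   # 1/2.3" active area from the text, in um^2
WAVELENGTH_UM = 0.55       # 550 nm green light

def true_optical_megapixels(na, m_total,
                            sensor_area_um2=SENSOR_AREA_UM2,
                            wavelength_um=WAVELENGTH_UM):
    """SBP-based megapixels of real optical information reaching the sensor."""
    fov_um2 = sensor_area_um2 / m_total**2            # field of view on specimen
    return fov_um2 * (4 * na / wavelength_um)**2 * 1e-6

# The hardware constant: MP = const * (NA / M_total)^2
const = SENSOR_AREA_UM2 * (4 / WAVELENGTH_UM)**2 * 1e-6
print(round(const, 1))                                # -> 1565.6

# Quadratic fall-off with magnification: doubling M_total divides MP by 4.
mp_50 = true_optical_megapixels(1.25, 50)             # 100x objective, 0.5x relay
mp_100 = true_optical_megapixels(1.25, 100)           # same objective, 1x relay
print(round(mp_50 / mp_100, 1))                       # -> 4.0
```

The last two lines verify the quadratic claim directly: halving the relay factor quadruples the true optical megapixels.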
To make this analysis concrete and applicable, we work with the characteristics of a high‑quality, stacked BSI CMOS sensor in 1/2.3" format, which is now common in mid‑ to high‑end, affordable microscopy cameras. This format is widely available in OEM industrial and scientific modules.
The specifications used are typical for high‑end stacked BSI sensors in this format.
This sensor has 12 physical megapixels. The question this document answers is how many of those 12 megapixels contain real optical information—as opposed to diffraction‑limited redundancy—depending on the objective used.
The short answer: for no standard microscope objective used with a conventional relay are all 12 MP filled with real information. In most practical configurations, true optical information ranges from about 0.1 to 8 megapixels, well below the sensor’s physical capacity.
When a camera is mounted on the trinocular port of a microscope, the image does not go straight to the sensor. It first passes through relay optics—also called a C‑mount adapter or coupler—whose function is to scale the microscope image to the sensor size. This element is critical and routinely ignored in commercial specs.
The relay determines how much specimen area the sensor covers and, consequently, how many megapixels of real optical information are captured. The relationship is quadratic: a 0.5× relay instead of 1× quadruples the specimen area covered and therefore quadruples the real information megapixels. In most practical cases, this effect far outweighs any sensor upgrade.
Field number note: modern microscope tubes project an image circle 22–26.5 mm in diameter. The 1/2.3" sensor has a 7.86 mm diagonal. With a 1× relay, the sensor covers only ~36% of the usable image circle diameter, wasting most of the information the objective delivers. With a 0.5× relay it covers ~71%, and with 0.35× it captures essentially 100% of the available field.
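The coverage percentages in the field‑number note follow from the geometry alone. The sketch below uses the 7.86 mm sensor diagonal and a 22 mm field number (the lower end of the 22–26.5 mm range quoted above); coverage is capped at 100% since a sensor cannot record more than the full image circle.

```python
SENSOR_DIAGONAL_MM = 7.86   # 1/2.3" format diagonal

def field_coverage(relay, field_number_mm=22.0):
    """Fraction of the microscope image-circle diameter the sensor spans.
    A relay of r scales the image circle onto the sensor by r, so the
    sensor effectively spans diagonal / r of the intermediate image."""
    return min(1.0, (SENSOR_DIAGONAL_MM / relay) / field_number_mm)

for relay in (1.0, 0.5, 0.35):
    print(f"{relay}x relay -> {field_coverage(relay):.0%} of the field diameter")
# -> 36%, 71%, 100%
```

The output reproduces the figures in the note: ~36% with a 1× relay, ~71% with 0.5×, and effectively the whole available field with 0.35×.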
The following table calculates, for each standard objective on the market in its achromatic and plan apochromatic variants, the number of true optical megapixels the system can capture with a 0.5× relay (primary configuration) and a 0.35× relay (maximum‑field configuration).
Calculation parameters: λ = 550 nm, 12 MP sensor with 1.55 μm pixels (1/2.3" format), SBP formula under theoretical Nyquist. Optical resolution uses the Rayleigh criterion (d = 0.61·λ/NA).
The 100× oil‑immersion objective is the workhorse for high‑resolution pathology and cytology. With NA 1.25 (standard achromat) and a 0.5× relay, the system can transmit about 0.98 megapixels of real information. With NA 1.45 (high‑end plan apochromat) and the same relay, this rises to 1.32 megapixels.
A 12‑megapixel sensor on such a system will generate a 12‑million‑pixel image in which roughly 10.7 to 11 million pixels are smooth interpolations of the diffraction pattern. They do not resolve any additional specimen detail or reveal hidden structures; they simply make the file larger.
This is not a sensor flaw; it is physics. A 100‑megapixel sensor in the same system would contain exactly the same real optical information: still 1.32 megapixels. The remaining ~98.7 megapixels are pure redundancy.
The picture is not entirely bleak for high‑density sensors. With low‑magnification, high‑NA objectives—especially the 4× Plan Apochromat (NA 0.20) and the 10× Plan Apochromat (NA 0.45)—the true optical megapixels with a 0.5× relay exceed the sensor’s 12‑megapixel capacity. In these cases, the sensor is the bottleneck: the optics can deliver more information than the sensor can record.
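The bottleneck question can be settled per configuration with the operational equation from Section 4. The sketch below uses the text's own constant (1566.88) and the NA values quoted above for the 4× and 10× Plan Apochromats, all with a 0.5× relay; the objective labels are illustrative.

```python
CONST = 1566.88   # hardware constant for the 1/2.3" sensor at 550 nm
SENSOR_MP = 12    # physical megapixels of the reference sensor

# objective -> (NA, total magnification with a 0.5x relay)
configs = {
    "4x Plan Apo":        (0.20, 2.0),
    "10x Plan Apo":       (0.45, 5.0),
    "100x oil achromat":  (1.25, 50.0),
}

for name, (na, m_total) in configs.items():
    mp = CONST * (na / m_total) ** 2
    limit = "sensor" if mp > SENSOR_MP else "optics"
    print(f"{name}: {mp:.2f} true optical MP -> {limit}-limited")
```

The 4× (≈15.7 MP) and 10× (≈12.7 MP) Plan Apochromats exceed the sensor's 12 MP and are sensor‑limited; the 100× configuration (≈0.98 MP) is firmly optics‑limited.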
This is the only scenario where it makes technical sense to invest in a higher‑density sensor. For all other situations—most clinical and research applications at 20× magnification and above—the sensor is already dense enough, and the limiting factor is always the optics.
The quadratic relationship between magnification and true megapixels is perhaps the most counterintuitive finding here. Doubling magnification divides real optical megapixels by four. Multiplying magnification by ten divides them by one hundred.
The reason is simple: higher magnification yields a smaller field of view on the specimen. A smaller field contains less total spatial information, regardless of how well the objective resolves it. It is like photographing a postage stamp with a 100‑megapixel camera: the camera can sample many pixels on the stamp, but the stamp does not contain 100 megapixels’ worth of detail.
Beyond information redundancy, oversampling has practical downsides that commercial literature rarely mentions:
The smaller photodiodes needed to increase pixel density have lower full‑well capacity, reducing per‑pixel dynamic range and making the sensor more prone to saturation.
Read noise, even if low in absolute terms, contributes more significantly to overall SNR when pixels collect fewer photons per unit area.
The large data volumes generated by oversized images increase storage costs, network load and processing time without proportional diagnostic benefit.
AI‑based image analysis systems work best with images that have high signal‑to‑noise ratio, not simply high nominal resolution with marginal SNR in the finest structures.
The true resolution of a digital microscopy system is a product of the optics, not the sensor. This is not a design opinion or a technological preference; it follows directly from diffraction physics and the sampling theorem, theoretical frameworks validated experimentally since the 19th and early 20th centuries respectively.
The megapixel count of a microscopy camera is not a measure of resolution. It is a measure of the sensor’s sampling capacity, which only has meaning when compared with the capacity of the optical system it is attached to. Without that comparison, megapixels tell you nothing about image quality.
For the vast majority of clinical and research microscopy use cases—magnifications of 20× and above—a 12‑megapixel, 1/2.3" sensor with a 0.5× relay already matches or exceeds the optical information capacity of the objective. Increasing sensor density does not improve image quality under those conditions.
Choosing the relay optics is at least as important as choosing the sensor if you want to maximise system performance. A 0.5× relay quadruples the true optical megapixels compared with a 1× relay, with no loss of optical resolution.
The only scenario where a higher‑density sensor delivers genuinely more value is with low‑magnification objectives (4×–10×) that have high NA (Plan Apochromats with NA > 0.20 at 4× and NA > 0.40 at 10×), where the system’s optical capacity exceeds the 12‑megapixel reference sensor.
Systems that advertise “resolution” based solely on sensor megapixel count, without specifying objective NA and relay factor, are providing an incomplete metric that does not allow users to compare real system quality. Honest evaluation of a digital microscopy system requires specifying: objective NA, relay factor and true optical megapixels as given by the SBP formula.
This document establishes a quantitative, publicly verifiable basis for such evaluation. The calculations are derived from first principles and can be independently reproduced by any laboratory with basic optics expertise. We invite you to do so.
[1] Abbe, E. (1873). Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung. Archiv für Mikroskopische Anatomie, 9(1), 413–468.
[2] Rayleigh, Lord (1896). On the Theory of Optical Images, with Special Reference to the Microscope. Philosophical Magazine, 42(255), 167–195.
[3] Born, M., & Wolf, E. (2013). Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (7th ed.). Cambridge University Press.
[4] Goodman, J. W. (2005). Introduction to Fourier Optics (3rd ed.). Roberts & Company Publishers.
[5] Pawley, J. B. (Ed.) (2006). Handbook of Biological Confocal Microscopy (3rd ed.). Springer. Chapter 2: Fundamental limits in confocal microscopy.
[6] Shannon, C. E. (1949). Communication in the Presence of Noise. Proceedings of the IRE, 37(1), 10–21.
[7] Nyquist, H. (1928). Certain Topics in Telegraph Transmission Theory. Transactions of the American Institute of Electrical Engineers, 47(2), 617–644.
[8] Zhang, Z., et al. (2023). Characterization of BSI CMOS Image Sensors for Scientific Microscopy Applications. Journal of Microscopy, 289(2), 112–124.
[9] Lichtman, J. W., & Conchello, J. A. (2005). Fluorescence microscopy. Nature Methods, 2(12), 910–919.
© 2026 Microluma. This document may be freely reproduced for educational and scientific purposes, provided the source is cited. microluma.eu
