Red Light, Blue Light: Balancing LED Efficiency with Performance

The ability of LED technology to manipulate the lighting spectrum should be used sparingly, as a means of mimicking natural diurnal and seasonal changes to benefit plants, not as a marketing tool.

Light-emitting diode (LED) technology offers revolutionary improvements to indoor horticulturalists.

However, as with other emerging technologies, the higher initial investment is a hurdle that slows adoption despite a lower total cost of ownership.

Yet as the medicinal plant industry has evolved, growers' pockets have deepened to the point that an initial cash outlay no longer precludes an equipment purchase.

Cultivators and investors now take the long view on profitability, and they’ve moved toward more efficient, multi-million-dollar facilities.

From this broader perspective, it’s clear LEDs offer a financial benefit.

LEDs dramatically increase efficiency, produce minimal heat, offer a 50,000-hour service life, and require little maintenance, making them an obvious choice over traditional high-intensity discharge (HID) technologies.

So why hasn’t LED technology been widely adopted by growers and greenhouses?

The answer lies in how the designers of the new grow lights manipulated the light spectrum, and in how photometric unit conventions changed to favor these new products.

Within the grow community, LEDs have a dubious track record and questionable reputation.

Given the design of many products on the market, that reputation is largely deserved.

Groupthink in the plant-growing industry holds that LED-produced light can't match the all-important product quality yielded by traditional HID lighting.

Lackluster performance and a handful of bad anecdotes from first-generation LED products perpetuate these beliefs.

The reason for these problems was designers' deviation from what worked, namely the high-pressure sodium (HPS) and metal halide (MH) spectra, in favor of a supposedly ideal blue/red combination.

This new spectrum was intended to give plants precisely the light they needed and nothing more.

Omitting “unused” wavebands of light was thought to save energy.

For reasons we’ll discuss below, the existing system of light measurement didn’t favor the new spectral profile championed by this design philosophy.

These manufacturers imposed a change in the units of light measurement, and product comparisons became complicated.

The Blue/Red Spectrum Rationale

The new blue/red designs were well-intentioned and not without logic.

Two loosely defined bands of light, blue and red, drive much of the action of photosynthesis.

This much is true. Dr. Keith J. McCree, a pioneering researcher and the originator of the McCree curve, described these two bands as peaking around 440 nm and 620 nm (blue and red, respectively).

A wide body of research has confirmed these findings, though subsequent work has found the absorbance maxima to be slightly different.

Chlorophyll a shows absorbance peaks at 430 nm and 662 nm, while chlorophyll b absorbs most strongly at 453 nm and 642 nm.

The first generation of LED designers went wrong by forcefully reducing the plant growth spectrum to just these two segments of photosynthetically active radiation (PAR).

Many light designers were once enthusiastic about the potential of blue/red LEDs too, but philosophies have evolved with experimentation.

Based on real-world experience, many now champion full-spectrum white lights.

Light-emitting diodes are often more electrically efficient at red and blue wavelengths, so it logically followed to reduce the output of horticultural lights to these two colors if plants preferred them as well.

By doing so, no resources were expended on “wasted” portions of the PAR spectrum.
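There is a kernel of physics behind the efficiency claim, and it can be made concrete. The sketch below (plain Python, standard physical constants; the wavelengths are the blue and red peaks cited above) shows why a red source delivers more photons, and therefore more micromoles, per joule of optical energy than a blue one:

```python
# Photon energy E = h*c / wavelength: longer wavelengths mean fewer
# joules per photon, so more photons delivered per joule of output.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
AVOGADRO = 6.022e23  # photons per mole

def umol_photons_per_joule(wavelength_nm: float) -> float:
    """Micromoles of photons carried by 1 J of monochromatic light."""
    photon_energy = H * C / (wavelength_nm * 1e-9)  # joules per photon
    photons = 1.0 / photon_energy                   # photons per joule
    return photons / AVOGADRO * 1e6                 # moles -> micromoles

blue = umol_photons_per_joule(440)  # roughly 3.7 umol/J
red = umol_photons_per_joule(660)   # roughly 5.5 umol/J
print(f"440 nm: {blue:.2f} umol/J, 660 nm: {red:.2f} umol/J")
```

A 660 nm photon carries about two-thirds the energy of a 440 nm photon, so a watt of deep red yields roughly 50% more photons than a watt of blue. That arithmetic, combined with the high electrical efficiency of red diodes, made the blue/red design look unbeatable on paper.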

After eons of evolution under sunlight, light designers thought they had discovered just what plants needed after all: pinkish-purple light.

By excluding the inefficient and electrically wasteful greens, yellows, and oranges, spectrum engineers thought they could grow better plants and do so more efficiently.

Much of this thinking stemmed from the McCree curve, the camel-backed spectral profile that shows plant metabolic response peaking at blue and red.

McCree established this illustrative tool in his 1970 study of dozens of plant species at Texas A&M University.

His work has served as the central tenet of spectrum science and the foundation for other research building upon his key observations.

But the McCree curve is not a recommendation for an ideal horticultural lighting spectrum; by McCree's own description, it was designed to provide a basis for discussing the definition of photosynthetically active radiation (PAR), not for optimizing a growth spectrum.

It explored whether plants responded to the various colors of light and what was, and was not, part of the PAR spectrum.

Accordingly, the experiment took a reductionist angle.

McCree’s team tested plant response to one wavelength of light at a time, isolated in 25 nm intervals between 400 nm and 700 nm.

A single cut leaf was placed in an isolation chamber and exposed to colored light while the carbon dioxide and oxygen levels were measured to deduce the metabolic response.

So, the exalted McCree Curve, while useful, isn’t a depiction of an ideal growth spectrum or a recommendation for light design.

It’s simply a set of isolated data points stitched together into a smooth line. It isn’t holistic.

When considered as a unified spectrum, it’s foreign to plants that are hardwired for sunlight.

A Paradigm Shift: Lumens vs. Micromoles Per Second

Before the introduction of LEDs, horticulturalists measured light output in lumens and lux: lumens express the rate at which a lamp emits visible light, while lux expresses the lumens falling on each square meter of surface.

In this era of light measurement, product comparisons were straightforward.

Growers understood the brightness of their lighting, and fewer lighting spectra were in use.

But the new technology of LEDs didn’t align with the prevailing frame of measurement, so the units were stated differently.

Micromoles per second (μmol/s) became the standard for assessing the quantity of light, and the change in units created considerable confusion.

The reason for the change centered on the luminosity function built into the lumens/lux measurement scale. Lumens take into account the sensitivity of the human eye to particular wavelengths of light — namely those centered around 550 nm (green light).

A micromole, by contrast, is a quantity of photons. The lumen system’s weighting of the spectrum allows us to assess the brightness of a light as we experience it rather than as a PAR meter would experience it.

A PAR meter registers photons (measured in μmols/sec) within the PAR range of 400-700 nm.
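The gap between the two systems can be illustrated numerically. The sketch below (plain Python; the CIE photopic sensitivity values V(λ) are hardcoded for three example wavelengths, a simplification for illustration) compares monochromatic green, red, and blue sources of equal optical power in both units:

```python
H, C, AVOGADRO = 6.626e-34, 2.998e8, 6.022e23

# CIE 1931 photopic luminosity function, sampled at three wavelengths.
# The human eye is most sensitive near 555 nm (green).
V = {450: 0.038, 555: 1.000, 660: 0.061}

def lumens_per_watt(wavelength_nm: int) -> float:
    """Luminous flux from 1 W of monochromatic optical power."""
    return 683.0 * V[wavelength_nm]  # 683 lm/W at the 555 nm peak

def ppf_per_watt(wavelength_nm: int) -> float:
    """Photon flux (umol/s) from 1 W of monochromatic optical power."""
    photon_energy = H * C / (wavelength_nm * 1e-9)  # joules per photon
    return 1.0 / photon_energy / AVOGADRO * 1e6

for wl in (450, 555, 660):
    print(f"{wl} nm: {lumens_per_watt(wl):6.1f} lm/W, "
          f"{ppf_per_watt(wl):.2f} umol/s per W")
```

The green source scores 683 lm/W while the red source scores only about 42 lm/W, yet the red source delivers the most PAR photons per watt of the three. A lumen rating systematically undervalues exactly the wavelengths grow lights emphasize, which is why the industry moved to photon-based units.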

