1. Introduction
Digital cameras are ubiquitous consumer electronics and are being explored to deliver capabilities beyond traditional photography and video. A new optical communication technique that uses cameras as receivers, called Optical Camera Communication (OCC), has been studied by the IEEE 802.15 SG7a within the framework of optical wireless communications and is considered a candidate for IEEE 802.15.7r1. OCC has been investigated as one of the Visible Light Communication (VLC) schemes [1]. OCC implemented within Internet of Things (IoT) environments provides multiple functionalities of vision, data communication, localization, and motion detection (MD) [2,3], used in various IoT-based network applications including device-to-device communications [4], mobile atto-cells [5], vehicular communications [6,7,8], and smart cities, offices, and homes (SCOH) [9].
The majority of new-generation smart devices have built-in Complementary Metal-Oxide-Semiconductor (CMOS) image sensors, providing the ability to capture photos and videos [10,11]. The strategy behind using a CMOS camera for OCC is that the image sensor performs an acquisition mechanism known as Rolling Shutter (RS), in which it sequentially integrates light on rows of pixels [12], starting the scanning of each line with a delay with respect to the previous one. In other words, the timing of the line-wise scanning makes the image sensor capture different time windows of the optical signal coming from a Light Emitting Diode (LED) transmitter. Each line of the image can therefore hold a distinct portion of information.
The use of LEDs available in SCOH lighting infrastructures, along with optical receivers, to build VLC systems is particularly challenging in outdoor environments. The potential applications of OCC in these scenarios relate to the creation and improvement of communication networks for vehicular and pedestrian infrastructures [13], where a large number of LED lights and CMOS cameras can be found. The desirable coverage distance of the services that can take advantage of OCC ranges from a few meters for hand-held receivers based on smartphones to tens of meters for vehicular networks that support Intelligent Transportation Systems (ITS). The achievable link distance in OCC depends partly on the signal-to-noise ratio (SNR) at the receiver, which in turn depends on the transmitted power, the attenuation caused by the channel, the optical lens array of the camera, and various sources of noise and interference. In RS-based systems, the maximum link distance is also restricted by the number of pixel rows covered by the transmitter: the geometry of the transmitting surface, as well as the configuration of the image-forming lens array, determines the image area in pixels [14]. The modulation and packet scheme may also limit the maximum link distance if the image frames must contain a minimum number of visible symbols for demodulation. Depending on the application, the LED- and camera-based transceivers can have static or mobile positions and orientations, making mobility support essential; this relies on the effective detection of the pixels whose SNR level is suitable for demodulation.
Vehicular VLC (VVLC) is a significant application case with challenging conditions of relative position and motion between nodes. An analysis comparing VVLC with radio frequency (RF) vehicle-to-vehicle (V2V) links in terms of channel time variation was proposed in [15], showing that VVLC links have much slower channel time variation than RF V2V links. On the other hand, the VVLC investigation in [16] found that the link duration between neighboring vehicles exceeds 5 s, while in certain other cases the average link duration can be up to 15 s. The safety regulations in [17,18] provide the speed limits and inter-vehicle distances for different weather conditions used to estimate the desired coverage distance.
Table 1 shows the speed limits based on the European Commission mobility and transport standards, which may vary slightly from one European country to another. The inter-vehicle distances outlined have been calculated using the 2 s driving rule for good to bad weather conditions, according to the Government of Ireland, which recommends that a driver maintain a minimum gap of two seconds from the leading vehicle in good weather conditions, doubled to four seconds in bad weather.
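As a quick numerical illustration of how such distances follow from the time-headway rules, the short Python sketch below converts a speed limit into the corresponding inter-vehicle gap for the 2 s (good weather) and 4 s (bad weather) cases. The speed values used are illustrative placeholders, not the exact entries of Table 1.

```python
# Sketch: inter-vehicle distance implied by the 2 s / 4 s driving rules.
# The speed limits below are illustrative placeholders, not the exact
# values listed in Table 1.
KMH_TO_MS = 1000.0 / 3600.0  # km/h -> m/s

def following_distance(speed_kmh: float, headway_s: float) -> float:
    """Distance (m) travelled during the given time headway."""
    return speed_kmh * KMH_TO_MS * headway_s

for speed in (50, 80, 100, 120):  # km/h, illustrative
    good = following_distance(speed, 2.0)  # good weather: 2 s rule
    bad = following_distance(speed, 4.0)   # bad weather: 4 s rule
    print(f"{speed:>3} km/h -> {good:5.1f} m (2 s) / {bad:5.1f} m (4 s)")
```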
The performance of the intensity-modulation and direct-detection method employed by LED-to-Photodiode (PD) VLC [9,19] is highly restricted by external light sources such as sunlight, public lighting, and signaling. Moreover, weather conditions, such as the presence of fog or high temperatures, cause substantial optical distortions [20]. Addressing these challenges, the authors of [21] derived a path-loss model for LED-to-PD VLC using Mie's theory, simulating rain and fog conditions in a vehicular VLC setting. They determined the maximum achievable distances as a function of the desired bit error ratio (BER) using pulse amplitude modulation (PAM). They found that, for a 32-PAM system, the maximum distance achievable for the desired BER is reduced from 72.21 m in clear weather to 69.13 m in rainy conditions, and to 52.85 m and 25.93 m in foggy conditions of different densities. The same Mie theory is also used in [22] to evaluate a PD-based VLC link under maritime fog conditions; scattering and phase functions are derived, as well as the spectrum of the attenuation of the optical signal for different distances. In [23], the authors experimented with a 1 m LED-to-PD VLC link based on a single 1 W red LED and multiple PDs attached to a Fresnel lens under dense fog conditions in a laboratory chamber. The lens allowed them to maintain a 25 dB signal-to-noise ratio (SNR) by varying the optical gain it provides to compensate for the attenuation caused by the fog.
Atmospheric turbulence, and oceanic turbulence in the case of Underwater Wireless Optical Communication (UWOC), has been extensively studied. Guo et al. introduced the traditional lognormal model into a simulated VLC link for ITS [24]. The authors showed that VLC wavelengths performed worse in ITS than longer ones (e.g., 1550 nm), which is straightforward, taking into account that the turbulence measured by Rytov's variance depends on the wavelength. In the case of UWOC, in which the use of visible-range wavelengths is mandatory due to the water absorption spectrum, Kolmogorov's turbulence spectrum is substituted by Nikishov's [25]. This turbulence spectrum fits the experimental measurements better since it takes into account not only temperature but also salinity variations.
Although the impact of turbulence has been characterized for classical optical detectors, its effect on OCC systems has not been adequately addressed yet. Works addressing channel characterization in outdoor OCC links [20] are still scarce compared to the amount of research on PD-based VLC. In previous work [26], we evaluated the feasibility of a global-shutter-based OCC link under fog conditions by measuring the bit success rate of a vehicular link experimentally tested with a red brake light and a digital reflex camera. For a modulation index of 75%, the system showed high reliability under dense fog conditions down to a meteorological visibility of 20 m.
The contribution of this paper is to experimentally assess the feasibility of OCC under emulated outdoor conditions of fog and heat-induced turbulence using commercially available LEDs and cameras. This work is the first to report an experimental investigation of the effects of such conditions on an RS-based system. The experiments were carried out in a laboratory chamber, emulating heat-induced turbulence and the presence of fog in the air. The refractive index structure parameter ($C_n^2$) [27] is used to estimate the level of turbulence, and the meteorological visibility ($V$) as a measure of the level of fog. The fog experiments are especially relevant because we utilize the camera's built-in amplifier to overcome the fog attenuation and to mitigate the relative contribution of the quantization noise introduced by the analog-to-digital conversion stage, improving the signal quality without increasing the exposure time and, thus, keeping a high bandwidth.
This paper is structured as follows.
Section 2 describes the methodology, including the channel modeling and the models for the meteorological phenomena studied, and presents the experimental design.
Section 3 presents the experimental setup, describing the laboratory chamber and the OCC system employed.
Section 4 shows the results obtained for the heat-induced turbulence and fog experiments and provides an in-depth discussion. Finally, conclusions are drawn in
Section 5.
2. Methodology
In this section, we describe the relevant processes involved in the CMOS camera acquisition mechanism of RS-based OCC employed by our system and derive the analytical tools used to evaluate its performance in the experimental setting.
2.1. Channel Modelling
In CMOS image sensors, the red-green-blue (RGB) light from a Bayer filter impinges on the subpixels. These entities are integrated by PDs and their driving circuits, and are grouped in rows connected in parallel to amplifiers and analog-to-digital converter (ADC) units shared by columns. The output of these hardware blocks are image matrices that are sent to the camera's digital signal processor (DSP), where the data is compressed and delivered to the user as a media file. The sensor performs RS acquisition, in which the start and end of the exposure of each row of pixels are determined by the circuit's fixed row-shift time ($t_{\mathrm{row}}$) and the software-defined exposure time ($t_{\mathrm{exp}}$) [28]. These time parameters and the circuitry mentioned are shown in Figure 1. Since $t_{\mathrm{row}}$ is fixed, in order to increase the data rate, $t_{\mathrm{exp}}$ must be set as low as possible so that the sensor captures the highest diversity of transmitter states within each frame.
must be set as low as possible to make the sensor capture the highest diversity of states of the transmitter within each frame. The received power
at a camera coming from a Lambertian light source of order
m and transmitted power
can be expressed as
where
and
are the emission and incident angles, respectively,
is the area of the camera’s external lens, and
d is the link span. From the RS mechanism shown in
Figure 1, we can express the energy
captured by the
row as
where
h (columns),
v (rows) are the dimensions of the image sensor, and
is the mask of pixels where the source shape is projected. From the integral limits, it can be derived that the bandwidth of the
system decreases with the augment of the exposure time. In other words, the longer is
, the more lines are simultaneously exposed, and the received signal is integrated in longer and less diverse time windows. For this reason, frames in OCC have to be acquired within short periods.
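To make the row-wise integration explicit, the following Python sketch simulates RS acquisition of an OOK square wave: each row integrates the instantaneous received power over its own exposure window, offset by the row-shift time. All timing values and the 5 kHz waveform are illustrative assumptions, not the parameters of our setup.

```python
import numpy as np

def rolling_shutter_rows(p_rx, dt, t_row, t_exp, n_rows):
    """Energy captured by each sensor row: the received power p_rx
    (sampled every dt seconds) is integrated over the exposure window
    [i * t_row, i * t_row + t_exp] of row i."""
    energies = np.empty(n_rows)
    for i in range(n_rows):
        start = int(round(i * t_row / dt))
        stop = int(round((i * t_row + t_exp) / dt))
        energies[i] = p_rx[start:stop].sum() * dt
    return energies

# Illustrative parameters (not those of the experimental setup).
dt = 1e-7          # simulation step: 100 ns
t_row = 18.9e-6    # row-shift time
t_exp = 50e-6      # exposure time
n_rows = 200
t = np.arange(0, n_rows * t_row + t_exp, dt)
p_rx = 0.5 * (1 + np.sign(np.sin(2 * np.pi * 5e3 * t)))  # 5 kHz OOK power

rows = rolling_shutter_rows(p_rx, dt, t_row, t_exp, n_rows)
# Increasing t_exp averages more OOK periods into every row, flattening
# the stripe pattern and thus reducing the effective bandwidth.
```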
Note that low values of $t_{\mathrm{exp}}$, along with the attenuation in outdoor channels caused by the presence of particles such as fog or by light refraction due to turbulence, can result in a received energy lower than the sensor's lowest detection threshold. To overcome this, we can take advantage of the amplifying stage in the subpixel circuitry shown in Figure 1. The voltage-ratio gain $G$ of the column amplifier block behaves as

$$G = \frac{V_{\mathrm{out}}}{V_{\mathrm{in}}}, \qquad (3)$$

where $V_{\mathrm{in}}$ is the voltage obtained from the pixel integration of light during the exposure, and $V_{\mathrm{out}}$ is the voltage value sampled by the ADC. In the case of the IMX219 sensor, and of other CMOS sensors with the architecture shown in Figure 1, a software-defined analog gain configuration can set the value of $G$ for each capture. The typical values of $G$ range from 0 dB to 20.6 dB, as shown in [29].
The column gain $G$ of the CMOS sensor amplifies the received signal and all the noise contributions up to the ADC. These include the shot noise at the PD and the thermal noise of the circuits, which can be modeled as Normal random variables with variances $\sigma_{\mathrm{shot}}^2$ and $\sigma_{\mathrm{th}}^2$, as:

$$\sigma_{\mathrm{shot}}^2 = 2\,q\left(I_{i,j,c} + I_d + I_{\mathrm{bg}}\right)B, \qquad \sigma_{\mathrm{th}}^2 = \frac{4\,k_B\,T_n\,B}{R_L}, \qquad (4)$$

where $k_B$ is Boltzmann's constant, $T_n$ is the noise temperature, $B$ is the bandwidth, $R_L$ is the equivalent resistance of the pixel readout circuit, $q$ is the electron charge, $I_d$ is the dark current of the camera's pixels, and $I_{i,j,c}$ is the PD current at pixel $(i,j)$ in the color band $c$. This current is determined by the emitted spectrum of the light source, the corresponding Bayer filter, and the substrate's responsivity. Finally, $I_{\mathrm{bg}}$ models the contribution of the background illumination level to the shot noise. Nonetheless, since reduced exposure times are generally used, the contribution of $I_{\mathrm{bg}}$ can be neglected.
The signal is then sampled by the ADC, introducing quantization noise ($\sigma_{q}^2$), which is usually modeled as a zero-mean Normal random contribution whose variance depends on the resolution of the converter. This results in the following SNR, referred to the DSP's input:

$$\mathrm{SNR} = \frac{G^2\,P_s}{G^2\left(\sigma_{\mathrm{shot}}^2 + \sigma_{\mathrm{th}}^2\right) + \sigma_{q}^2}, \qquad (5)$$

where $P_s$ denotes the electrical power of the received signal. Considering the SNR as a function of $G$, it can be observed that it increases monotonically with an upper asymptote given by $P_s/(\sigma_{\mathrm{shot}}^2 + \sigma_{\mathrm{th}}^2)$. Especially when the signal entering the ADC is weak, e.g., in high-attenuation scenarios such as the presence of dense fog, the relative loss due to quantization noise can be minimized by increasing the column amplification. In other words, the SNR can be optimized through the camera's analog gain, unless the ADC is saturated.
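The behaviour described above can be illustrated numerically. The sketch below evaluates an SNR of the form given in Equation (5) for increasing column gain, using arbitrary noise variances; it shows the approach to the asymptote set by the shot and thermal noise once the quantization term becomes negligible. All numeric values are assumptions for illustration only.

```python
import numpy as np

def snr_db(sig_power, var_shot, var_thermal, var_quant, gain_db):
    """SNR at the DSP input: the column gain amplifies the signal and the
    shot/thermal noise alike, while quantization noise is added at the ADC."""
    g2 = 10.0 ** (gain_db / 10.0)  # squared voltage gain (power gain)
    snr = g2 * sig_power / (g2 * (var_shot + var_thermal) + var_quant)
    return 10.0 * np.log10(snr)

# Arbitrary illustrative values (not measurements).
sig, v_sh, v_th, v_q = 1.0, 0.05, 0.02, 0.5
for g in (0, 5, 10, 15, 20.6):  # IMX219-like analog gain range, dB
    print(f"G = {g:4.1f} dB -> SNR = {snr_db(sig, v_sh, v_th, v_q, g):5.2f} dB")
# As G grows, the SNR approaches 10*log10(sig / (v_sh + v_th)), i.e., the
# quantization-free asymptote, provided the ADC does not saturate.
```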
Our system employs On-Off Keying (OOK) modulation on each of the color bands with a fixed data input that is used as a beacon signal. For bit error ratio (BER) derivation, let us assume that the system now works with a random data input with $p_0 = p_1 = 0.5$ as the probabilities of the values 0 and 1, respectively. The Maximum Likelihood Estimator (MLE) threshold $\gamma$ at the detection stage of the OOK demodulation is then given by

$$\gamma = \frac{\mu_0 + \mu_1}{2}, \qquad (6)$$

where $\mu_0$ and $\mu_1$ are the expected values of the received signal when the transmitted bit equals 0 and 1, respectively. If the receiver's DSP applied a digital gain $G_d$, the resulting MLE threshold would be $G_d\,\gamma$. In this case, if $G_d\,\gamma \geq 2^{b}-1$, where $b$ is the bit depth and $2^{b}-1$ is the maximum digital value of the signal coming from the ADC, the BER would tend to the worst case of a coin flip (error probability equal to 0.5).
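A minimal sketch of the OOK decision stage follows, assuming Gaussian noise of equal variance for both symbols; the 8-bit depth and the signal levels are illustrative, not measured values.

```python
from math import erfc, sqrt

def ook_threshold(mu0: float, mu1: float) -> float:
    """MLE decision threshold for equiprobable OOK with equal noise variance."""
    return 0.5 * (mu0 + mu1)

def ook_ber(mu0: float, mu1: float, sigma: float) -> float:
    """BER of OOK under Gaussian noise of standard deviation sigma."""
    q_arg = (mu1 - mu0) / (2.0 * sigma)
    return 0.5 * erfc(q_arg / sqrt(2.0))  # Q-function via erfc

# Illustrative 8-bit example (assumed values, not measurements).
bit_depth = 8
adc_max = 2 ** bit_depth - 1
mu0, mu1, sigma = 30.0, 180.0, 20.0
print("threshold:", ook_threshold(mu0, mu1), " BER:", ook_ber(mu0, mu1, sigma))

# A digital gain applied at the DSP scales the threshold as well; once the
# scaled threshold reaches the ADC ceiling, both symbols clip to adc_max
# and the error probability tends to the coin-flip limit of 0.5.
g_digital = 2.0
print("threshold clipped:", g_digital * ook_threshold(mu0, mu1) >= adc_max)
```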
2.2. Meteorological Phenomena
The presence of fog particles and of turbulence in the air are well-known sources of signal distortion in outdoor optical systems. These conditions can be emulated in a laboratory chamber, and their degree can be estimated with well-known parameters, as explained in the following derivations.
Beer’s law [
30] can describe the attenuation of propagating optical signals caused by fog. Generally, in optical systems, visibility
in km is used to characterize fog attenuation (
). Using the Mie’s scattering model [
31],
can be related to
as:
where
denotes wavelength in nm and parameter
q is the distribution size of scattering particles given by Kim’s model [
32], which is in the short range of visibility (<0.5 km) considered equal to zero. Thus,
is given by:
The channel coefficient for fog, $h_{\mathrm{fog}}$, can be determined by applying Beer's law, which describes light scattering and absorption in a medium, as:

$$h_{\mathrm{fog}} = e^{-\beta_{\mathrm{fog}}\, d}. \qquad (9)$$

Consequently, the average received optical power of the LOS link at the camera under fog is expressed as:

$$P_{\mathrm{fog}} = h_{\mathrm{fog}}\, P_r + n, \qquad (10)$$

where $n$ denotes the addition of the noise contributions associated with $\sigma_{\mathrm{shot}}^2$ and $\sigma_{\mathrm{th}}^2$.
The coefficient $h_{\mathrm{fog}}$ depends on the value of the product of the fog attenuation and the distance ($\beta_{\mathrm{fog}}\, d$), which is known as the optical density of the link. This variable can take the same value for different combinations of fog level and link span, allowing the influence of both variables to be inferred by varying only one of them.
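The fog model above can be condensed into a few lines of Python. The sketch below implements the full piecewise Kim model for completeness, although only the q = 0 branch applies to the short visibility range considered here, and uses Beer's law to obtain the channel coefficient; the two example calls show that different fog/distance pairs with the same optical density give the same channel gain.

```python
import numpy as np

def kim_q(visibility_km: float) -> float:
    """Particle size distribution exponent q of Kim's model."""
    if visibility_km < 0.5:
        return 0.0
    if visibility_km < 1.0:
        return visibility_km - 0.5
    if visibility_km < 6.0:
        return 0.16 * visibility_km + 0.34
    if visibility_km < 50.0:
        return 1.3
    return 1.6

def fog_attenuation(visibility_km: float, wavelength_nm: float = 550.0) -> float:
    """Fog attenuation coefficient beta_fog (1/km)."""
    q = kim_q(visibility_km)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

def fog_channel_gain(visibility_km: float, distance_km: float) -> float:
    """Beer's law channel coefficient h_fog = exp(-beta_fog * d)."""
    return np.exp(-fog_attenuation(visibility_km) * distance_km)

# Equal optical density (beta_fog * d) gives the same channel gain:
print(fog_channel_gain(0.05, 0.01))  # V = 50 m,  d = 10 m
print(fog_channel_gain(0.10, 0.02))  # V = 100 m, d = 20 m
```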
The heat-induced turbulence of the air results from variations of the temperature and pressure of the atmosphere along the transmission path. This leads to variations of the refractive index of the air, resulting in amplitude and phase fluctuations of the propagating optical beam [33]. The parameter most commonly used to describe the strength of atmospheric turbulence is the refractive index structure parameter $C_n^2$ (in units of m$^{-2/3}$) [34,35], given by:

$$C_n^2 = \left(79 \times 10^{-6}\,\frac{P}{T^2}\right)^{2} C_T^2, \qquad (11)$$

where $T$ represents the temperature in Kelvin, $P$ is the pressure in millibar, and $C_T^2$ is the temperature structure parameter, which is related to the universal 2/3 power law of temperature variations [35] given by:

$$C_T^2 = \left\langle \left(T_1 - T_2\right)^{2}\right\rangle d_{12}^{-2/3}, \qquad l_0 \ll d_{12} \ll L_0, \qquad (12)$$

where $(T_1 - T_2)$ is the temperature difference between two points separated by the distance $d_{12}$, while the outer and inner scales of the small temperature variations are denoted by $L_0$ and $l_0$, respectively.
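The following sketch shows how $C_n^2$ can be estimated from paired temperature readings using the two expressions above. The sensor separation, pressure, and temperature samples are placeholders, and the helper names are ours, not those of the measurement software.

```python
import numpy as np

def temperature_structure_parameter(t1_k, t2_k, separation_m):
    """C_T^2 (K^2 m^(-2/3)) from paired temperature samples, assuming the
    sensor separation lies between the inner and outer turbulence scales."""
    t1_k, t2_k = np.asarray(t1_k, float), np.asarray(t2_k, float)
    return np.mean((t1_k - t2_k) ** 2) * separation_m ** (-2.0 / 3.0)

def refractive_index_structure_parameter(ct2, pressure_mbar, temperature_k):
    """C_n^2 (m^(-2/3)) from pressure (millibar) and mean temperature (K)."""
    return (79e-6 * pressure_mbar / temperature_k ** 2) ** 2 * ct2

# Placeholder sensor samples (K), separation, and ambient conditions.
t1 = np.array([305.2, 305.6, 305.1, 305.9])
t2 = np.array([304.1, 304.9, 304.3, 304.6])
ct2 = temperature_structure_parameter(t1, t2, separation_m=0.5)
cn2 = refractive_index_structure_parameter(ct2, pressure_mbar=1013.0,
                                           temperature_k=305.0)
print(f"C_T^2 = {ct2:.3e} K^2 m^(-2/3),  C_n^2 = {cn2:.3e} m^(-2/3)")
```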
2.3. Experimental Design
For the OCC system to be tested under emulated meteorological phenomena, the following conditions were considered. The signal transmitted by the VLC lamp was chosen to be a repetitive beacon formed by a sequence of on-off pulses of each of the RGB channels, followed by a black (off state) pulse denoted as K; the beacon was arbitrarily set to the sequence G-R-B-K. The K pulse allows measuring the dark intensity in the pixels that cover the lamp image, while the pure color pulses allow estimating the inter-channel cross-talk between the LED RGB colors and the RGB subpixels of the camera, as explained in our previous work [36]. The camera was configured to take captures sequentially with a fixed $t_{\mathrm{exp}}$ and different values of the gain $G$. After taking reference measurements, the atmospheric conditions were emulated while the beacon transmission and capture processes were sustained. The reference and test image sequences are processed through the stages shown in Figure 2, including the extraction of the relevant pixel area in the picture, the estimation and compensation of the inter-channel cross-talk, and, finally, the computation of the correlation between the signals obtained in clear conditions and under emulated weather conditions.
The extraction of the relevant group of pixels in OCC image frames, known as Region of Interest (ROI) detection, consists of locating the projection of the source in the image. In this case, we first manually locate and extract the ROI from the reference sequence. Then, since the test group is taken with the same alignment, the ROI stays fixed and the same coordinates are re-used. The pixels containing data are then averaged by row, giving the three-channel (RGB) signal $\mathbf{T}$, where $N$ is the number of rows of the ROI. From the reference ROI, a template of one G-R-B-K beacon signal is saved as $\mathbf{R}$, where $M$ is the number of rows used by one beacon in the RS acquisition.
As shown in previous work [36], the inter-channel cross-talk (ICCT), which is caused by the mismatch between the LED and the camera Bayer filter spectra, is estimated from clear frames and then compensated in all datasets. We separately analyze the R, G, and B pulses of the beacon signal. A matrix $\mathbf{C}$ is obtained by averaging the contribution of each pure-LED pulse at the three RGB subpixels. In other words, a component $c_{ij}$ of $\mathbf{C}$ is the average measurement from the $i$-th subpixel when the $j$-th LED is illuminating it, where $i, j \in \{\mathrm{R}, \mathrm{G}, \mathrm{B}\}$. The inverse matrix $\mathbf{C}^{-1}$ is used to clean all the datasets from the ICCT found in this configuration. Finally, the ICCT-cleaned signals $\mathbf{R}$ and $\mathbf{T}$ are compared using Pearson's correlation coefficient $\rho$, which is defined as:

$$\rho = \frac{\sum_{i=1}^{M}\left(r_i-\bar{r}\right)\left(t_i-\bar{t}\right)}{\sqrt{\sum_{i=1}^{M}\left(r_i-\bar{r}\right)^{2}}\,\sqrt{\sum_{i=1}^{M}\left(t_i-\bar{t}\right)^{2}}}, \qquad (13)$$

where $r_i$ are the reference sample points from $\mathbf{R}$, of size $M$, $t_i$ are $M$ consecutive samples of $\mathbf{T}$, and $\bar{r}$ and $\bar{t}$ are the corresponding mean values. The correlation is calculated for all possible subsets of consecutive samples of $\mathbf{T}$, and the maximum value $\rho_{\max}$ is considered the similarity of the frame to the reference.
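A minimal sketch of the last two processing stages of Figure 2 is given below: the ICCT compensation through the inverse cross-talk matrix and the sliding Pearson correlation that yields $\rho_{\max}$. The array shapes, the synthetic data, and the function names are illustrative assumptions rather than the exact implementation used in the experiments.

```python
import numpy as np

def compensate_icct(signal_rgb: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Remove inter-channel cross-talk from a (rows, 3) signal by applying
    the inverse of the 3x3 cross-talk matrix C."""
    return signal_rgb @ np.linalg.inv(c).T

def max_sliding_correlation(reference: np.ndarray, test: np.ndarray) -> float:
    """Maximum Pearson correlation between a single-channel reference
    template and every window of consecutive samples of the test signal."""
    n = len(reference)
    best = -1.0
    for start in range(len(test) - n + 1):
        r = np.corrcoef(reference, test[start:start + n])[0, 1]
        best = max(best, r)
    return best

# Synthetic example: shapes only, not experimental data.
rng = np.random.default_rng(0)
c = np.array([[1.00, 0.20, 0.10],   # c_ij: response of subpixel i (R, G, B)
              [0.15, 1.00, 0.20],   # while only LED j is on
              [0.05, 0.25, 1.00]])
roi_rows = rng.random((120, 3))     # row-averaged ROI signal of one frame
template = rng.random((40, 3))      # one G-R-B-K beacon template
clean = compensate_icct(roi_rows, c)
rho_max = max_sliding_correlation(template[:, 0], clean[:, 0])
print(f"rho_max (R channel) = {rho_max:.3f}")
```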
4. Results
In this section, we show the results of the analysis of the images obtained from the heat-induced turbulence and fog experiments described in Section 3. The maximum values of the correlation coefficient, $\rho_{\max}$, were computed between the ICCT-compensated reference image sequence and the images captured under the different conditions, as explained in Section 2. The $\rho_{\max}$ values obtained are analyzed together with the experimental parameters: $C_n^2$ in the case of heat-induced turbulence, and $V$ and $G$ in the case of fog.
4.1. Heat-Turbulence Experiments
The reference image sequence of the heat-turbulence experiment was captured with the chamber heaters off at a stabilized laboratory temperature of 21.7 °C. Thus, the template signal extracted from these captures is the result of operating the system under a negligible level of turbulence. The remaining test image sequence was captured under the thermal influence of the channel in two parts: one at a higher laboratory temperature of 32.3 °C, and a second with the heaters of the chamber working at full power, setting another turbulence level. The $C_n^2$ parameter value is then calculated using the samples from the temperature sensors. The $\rho_{\max}$ values between the frames of the test image sequence and the template are calculated, and with these values we infer the influence of this phenomenon.
The refractive index structure parameter values during the first part of the test image sequence capture, at the high room temperature with the heaters off, ranged from
m$^{-2/3}$ to
m$^{-2/3}$. In the second part, the range of turbulence increased, spanning from
m$^{-2/3}$ to
m$^{-2/3}$.
The obtained $\rho_{\max}$ values between the signals from each part of the experiment and the template are shown as histograms in Figure 5. To estimate the similarity between the $\rho_{\max}$ data from the reference and from each part of the test image sequence, a Kolmogorov-Smirnov (KS) statistical test was performed; this non-parametric test estimates whether two data sets are samples from the same distribution, with a confidence given by the $p$-value [38]. The result is that the first part of the test image sequence has a
confidence value of having the same distribution as the reference, and the second has
. This shows an almost negligible influence of turbulence on OCC systems.
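The comparison of the $\rho_{\max}$ distributions can be reproduced with the two-sample KS test available in SciPy, as sketched below; the per-frame correlation values are synthetic placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

# Two-sample Kolmogorov-Smirnov test between the per-frame rho_max values
# of the reference sequence and of one part of the test sequence.
# The arrays below are synthetic placeholders.
rng = np.random.default_rng(1)
rho_reference = 0.97 + 0.01 * rng.standard_normal(200)
rho_test_part = 0.97 + 0.01 * rng.standard_normal(200)

result = ks_2samp(rho_reference, rho_test_part)
# A large p-value means we cannot reject that both sets of rho_max values
# come from the same distribution, i.e., turbulence barely affects the link.
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```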
The different ranges of turbulence analyzed presumably have the same distribution of $\rho_{\max}$ values, according to the KS statistical test, and the vast majority of them reach high $\rho_{\max}$ values, which means that the experimental setup behaves very similarly to the reference regardless of the turbulence ranges that were induced. This robustness of the system can be attributed to the short link distance and the large field of view of the camera, both of which make the refraction effects unnoticeable in the received signal of our system.
4.2. Fog Experiments
For the fog emulation experiment, the reference image sequence was taken under clear air in the laboratory chamber while the optical power meter measured the power of the laser without fog attenuation. The test image sequence was taken while the chamber was arbitrarily supplied with fog from the Antari F-80Z machine, and the laser power was measured synchronously in order to label each image with the current visibility $V$. The gain $G$ of the images was sequentially modified from 0 to 16 dB in steps of 1 dB during the test image sequence, while for the reference it was set to the default value of zero.
The $\rho_{\max}$ values obtained for the test image sequence varying $V$ and $G$ are shown as a contour plot in Figure 6. The high-correlation area determines three important regions (highlighted in Figure 6 by dashed circles). For high values of visibility, the signal coming from the transmitter is not affected by the fog attenuation and is received with the highest power; in this case, increasing the gain causes saturation of the ADC, affecting the correlation. In the low-visibility region, the presence of dense fog attenuates the received signal and lowers the correlation. It can be seen that, in this low-visibility region, increasing the gain yields a high correlation, meaning that the camera amplifier compensates for the fog attenuation. The region in between, around 50 m of visibility, shows high values of correlation regardless of the gain variations. The three regions described are shown in Figure 7, and a non-parametric locally estimated scatterplot smoothing (LOESS) regression [39] is performed, with a fixed span parameter, to show the trend of the data points. Examples of the ROI extraction from the test image sequence are included to depict the effect of visibility and gain on the frames.
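The LOESS trend lines of Figure 7 can be approximated with the LOWESS smoother available in statsmodels, as in the sketch below; the data points are synthetic and the frac argument plays the role of the span parameter.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Locally weighted regression of rho_max against visibility for one gain
# setting; the data are synthetic placeholders.
rng = np.random.default_rng(2)
visibility_m = np.sort(rng.uniform(10, 200, 150))
rho_max = np.clip(1 - 0.8 * np.exp(-visibility_m / 30)
                  + 0.03 * rng.standard_normal(150), 0, 1)

trend = lowess(rho_max, visibility_m, frac=0.5)  # columns: x, smoothed y
print(trend[:3])
```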
From the minimum gain values within the high-correlation area, an optimum gain curve $G_{\mathrm{opt}}(V)$ is derived, given that there is an inverse proportionality relationship between the meteorological visibility and the camera gain:

$$G_{\mathrm{opt}}(V) = \frac{\kappa}{V}, \qquad (14)$$

where $\kappa$ is an empirical parameter. Using curve fitting, the value of $\kappa$ (in dB·km) was derived for our experimental setup.
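The inverse-proportionality fit can be carried out with a standard least-squares routine, as sketched below. The (visibility, gain) pairs are placeholders rather than the points extracted from Figure 6.

```python
import numpy as np
from scipy.optimize import curve_fit

def optimum_gain(visibility_km, kappa):
    """Optimum analog gain (dB) modelled as inversely proportional to
    the meteorological visibility: G_opt = kappa / V."""
    return kappa / visibility_km

# Placeholder (visibility, minimum gain) pairs, not the measured points.
v_km = np.array([0.010, 0.020, 0.040, 0.080, 0.150])
g_db = np.array([15.0, 8.0, 4.0, 2.0, 1.0])

(kappa,), _ = curve_fit(optimum_gain, v_km, g_db)
print(f"kappa ~ {kappa:.3f} dB*km")
```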
In order to calculate the SNR from the empirical data obtained, we have considered that OOK modulation is used. The following approximation of the SNR has been derived (note the factor of 4 due to OOK, since the time average of an equiprobable OOK signal is half of its peak amplitude):

$$\mathrm{SNR} \approx \frac{4\,\mathrm{E}[S]^{2}}{\operatorname{Var}[S]}, \qquad (15)$$

where $S$ comprises the samples of the pixels that fall within the ROI mask $M$ described in Equation (2); since the transmitter and the camera are static, the mask determined from the reference images is the same for the whole experiment. $\mathrm{E}[\cdot]$ and $\operatorname{Var}[\cdot]$ denote the statistical expected value and variance, respectively.
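A per-frame estimate following this approximation can be computed directly from the ROI samples, as in the sketch below; the sample values are synthetic.

```python
import numpy as np

def empirical_snr_db(roi_samples) -> float:
    """Per-frame SNR estimate (dB) from the ROI pixel samples of one colour
    channel, following the OOK-based approximation above."""
    s = np.asarray(roi_samples, dtype=float)
    return 10.0 * np.log10(4.0 * np.mean(s) ** 2 / np.var(s))

# Synthetic ROI samples: equiprobable on/off rows plus Gaussian noise.
rng = np.random.default_rng(3)
amplitude, sigma = 120.0, 15.0
bits = rng.integers(0, 2, 400)
samples = amplitude * bits + sigma * rng.standard_normal(400)
print(f"empirical SNR ~ {empirical_snr_db(samples):.1f} dB")
```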
The empirical SNR was calculated for all the image sequences of the fog experiments. The results for the frames taken with $G$ = 11 dB are shown in Figure 8 for the three RGB channels. This value of gain was chosen because, as shown in Figure 6, the level $G$ = 11 dB is affected both by the dense fog and by the saturation. The SNR values in Figure 8 are plotted against the optical density on a logarithmic scale. They show that higher attenuation ($\beta_{\mathrm{fog}}$) values, or alternatively longer link spans, cause a decay of the SNR. Therefore, a curve fitting was carried out assuming that the SNR decays at a constant rate of $\alpha$ dB per decade of optical density, as follows:

$$\mathrm{SNR}(\beta_{\mathrm{fog}}\, d)\;[\mathrm{dB}] = \mathrm{SNR}(1) - \alpha \log_{10}\!\left(\beta_{\mathrm{fog}}\, d\right), \qquad (16)$$

where $\mathrm{SNR}(1)$ is the estimated signal-to-noise ratio at unitary optical density.
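The decay-rate fit amounts to a linear regression of the SNR (in dB) against the base-10 logarithm of the optical density, as sketched below with placeholder data.

```python
import numpy as np

# Linear fit of the empirical SNR (dB) against log10 of the optical density
# beta_fog * d. The data pairs below are placeholders.
rng = np.random.default_rng(4)
optical_density = np.logspace(-1, 1, 40)  # beta_fog * d
snr_db = 12.0 - 9.0 * np.log10(optical_density) + rng.standard_normal(40)

slope, snr_at_unity = np.polyfit(np.log10(optical_density), snr_db, 1)
print(f"decay rate ~ {-slope:.1f} dB per decade, SNR(1) ~ {snr_at_unity:.1f} dB")
```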
The influence of the SNR values obtained from the image sequences on $\rho_{\max}$ was also evaluated, as shown in Figure 9. A LOESS regression again shows the trend of the scatterplots in the figure: $\rho_{\max}$ increases with the SNR, except for the highest SNR values in the blue channel, which are affected by saturation of the ADC. It can also be seen that SNR values higher than 5 dB yield a high $\rho_{\max}$ for most of the samples. From this, it can be concluded that $\rho_{\max}$ is a valid metric for the quality of the signal in OCC, although the SNR is more robust.

The results obtained in this experiment show that the fog attenuation can weaken the optical signal power to the point that the noise introduced by the ADC considerably affects the SNR. In other words, the conversion to digital corrupts the weak optical signal received under dense fog conditions or over long link spans. In these cases, the column amplifier of the camera is crucial to keep a high-amplitude input at the ADC and reduce the effect of quantization.
5. Conclusions
In this paper, we presented an experimental study of the influence of two kinds of atmospheric conditions on an RS-based OCC link: the heat-induced turbulence due to random fluctuations of the refractive index of the air along the path, and the attenuation caused by the presence of fog particles in the air. The image sequences captured under the two conditions were compared to a reference sequence of images taken under clear conditions, using the maximum value of Pearson's correlation coefficient, $\rho_{\max}$, to determine their similarity. We have also evaluated the signal quality through the empirical SNR obtained from the image frames and showed its relationship with $\rho_{\max}$ and its dependence on the product of the fog attenuation and the link span, known as the optical density. The most important findings of this work are, first, that the emulated turbulence levels do not affect the signal quality considerably. For the fog experiments, we have derived an expression for the theoretical SNR as a function of the analog camera gain, showing that a CMOS camera-based OCC system can improve the SNR by using the column amplifier. In the fog experiments, the correlation was impaired in two different cases: for high values of $V$, when the gain is increased, the correlation drops because of the saturation of the signal; for low visibility, the attenuation caused by the fog impairs the similarity to the reference when the gain is low, because of the loss due to quantization noise at the ADC. For the latter case, it was found that by increasing the gain of the camera the attenuation can be compensated, allowing the OCC link to receive the signal with a high correlation for visibility values down to 10 m. Our findings show that there is an inverse proportionality relationship between the optimum camera gain and the visibility, and that the empirical SNR decays at a constant rate (in dB per decade) with the optical density. This use of the CMOS camera's built-in amplifier opens a new possibility for OCC systems, extending the control strategy and allowing low exposure times, and thus a high bandwidth, to be kept even in dense fog scenarios.