Tuesday, September 12, 2017

Using an inexpensive MPPT controller in a portable solar charging system

As I'm wont to do, I occasionally go backpacking, carrying (a bit too much!) gear with me - some of it electronic, such as a camera, GPS receiver and ham radio(s).  Because I'm usually out for a week or so - and because I often have others with me who may also have battery-powered gear - there arises the need for a way to keep everyone's batteries charged.

Having done this for decades I've carried different panels with me over that time, wearing some of them out in the process, so it was time for a "refresh" and a new system using both more-current technology and based, in part, on past lessons learned.

Why 12 volt panels?

If you look about you'll find that there are a lot of panels nowadays that are designed to charge USB devices - which is fine if all you need to do is charge USB devices, but many cameras, GPS receivers, radios and other things aren't necessarily compatible with being charged from just 5 volts.  The better solution in these cases is to start out with a higher voltage - say that from a "12 volt" panel intended for also keeping a car battery afloat - and convert it down to the desired voltage(s) as needed.

After a bit of difficulty in finding a small, lightweight panel that natively produced the raw "12 volts" output from the array (actually, 16-22 volts unloaded) I found an 18 watt folding panel that weighed just a bit more than a pound by itself.  It happened to also include a USB charge socket - but it can be hard to find one without that accessory!
Figure 1:
The 6-7 Ah LiFePO4 battery, MPPT controller and "18 watt" solar panel.
The odd shape of the LiFePO4 battery is due to its being intended to power
bicycle lighting, fitting in a water bottle holder.
Click on the image for a larger version.

By operating at "12 volts" you now have the choice of practically any charging device that can be plugged into a car's 12 volt accessory socket (e.g. cigarette lighter) and there are plenty of those about for nearly anything from AA/AAA chargers for things like GPS receivers and flashlights to those designed to charge your camera.  An advantage of these devices is that nowadays, they are typically very small and lightweight, using switching power converters to take the panel's voltage down to what is needed with relatively little loss.

But there is a problem.

If you use a switching power converter to take a high voltage down to a lower voltage, it will dutifully try to maintain a constant power output - which means that it will also attempt to maintain a constant power input - and this can lead to a vexing problem.


Take as an example a switching power converter that is 100% efficient, charging a 5 volt device at 2 amps, or (5 volts * 2 amps =) 10 watts.

If we are feeding this power converter with 15 volts, we need (10 watts / 15 volts =) 0.66 amps, but if we are supplying it with just 10 volts, we will need (10 watts / 10 volts =) 1 amp - all the way down to 2 amps at 5 volts.  What this means is that while we always have 10 watts with these differing voltages, we will need more current as the voltage goes down.
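The arithmetic above can be sketched in a few lines.  This is a hypothetical helper (the function name and the 100%-efficiency assumption are mine, for illustration only):

```python
def input_current(load_watts, input_volts, efficiency=1.0):
    """Current a switching converter must draw from its input to deliver
    load_watts at its output (assumed 100% efficient unless stated)."""
    return load_watts / (input_volts * efficiency)

# the same 10 watt load, at three different input voltages
for v in (15.0, 10.0, 5.0):
    print(f"{v:4.1f} V in -> {input_current(10.0, v):.2f} A")
```

The constant-power behavior is plain: as the input voltage halves, the input current doubles.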

Now suppose that we have a 15 watt solar panel.  As is the nature of solar panels, there is a "magic" voltage at which our wattage (volts * amps) will be maximum, but there is also a maximum current that a panel will produce that remains more or less constant, regardless of the voltage.  What this means is that if our panel can produce its maximum power at 15 volts where it is producing 1 amp, if we overload the panel slightly and cause its voltage to go down to, say, 10 volts, it will still be producing about 1 amp - but only making (10 volts * 1 amp =) 10 watts of power!  Clearly, if we wish to extract maximum power to make the most of daylight we will want to pick the voltage at which we can get the maximum power.
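That panel behavior can be modeled crudely in a few lines - the knee and open-circuit voltages below are invented for illustration, not measured from any particular panel:

```python
def panel_current(v_load, i_max=1.0, v_knee=15.0, v_oc=18.0):
    """Very crude solar panel model: roughly constant current below the
    'knee' voltage, falling to zero at the open-circuit voltage."""
    if v_load <= v_knee:
        return i_max
    return max(0.0, i_max * (v_oc - v_load) / (v_oc - v_knee))

# power available at three different operating voltages
for v in (5.0, 10.0, 15.0):
    print(f"{v:4.1f} V -> {v * panel_current(v):5.1f} W")
```

Because the current is nearly flat, the power delivered is roughly proportional to the operating voltage - right up to the knee, beyond which it collapses.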

Dealing with "stupid" power converters:

Suppose that, in our example, we are happily producing 10 watts of power to charge that 5 volt battery at 2 amps.  At 15 volts, we need only 0.66 amps to get that 10 watts, but then a black cloud comes over and the panel can now produce only 0.25 amps.  Because our switching converter is "stupid", it will always try to pull 10 watts - but when it does so, the voltage on its input, from the panel, will drop.  In this scenario, our voltage converter will pull the voltage all of the way down to about 5 volts - but since the panel can only produce 0.25 amps, we will be charging with only (5 volts * 0.25 amps =) 1.25 watts.


Now the sun comes out - but the switching converter, being stupid, is still trying to pull 10 watts.  Since it has dragged the voltage down to 5 volts, it needs 2 amps to get its 10 watts - but since our panel can never produce more than an amp, it will be stuck there, forever, producing only about (5 volts * 1 amp =) 5 watts.

If we were to disconnect the battery being charged momentarily so that the switching converter no longer saw its load and needed to try to output 10 watts, the input voltage would go back up to 15 volts - and then when we reconnected the battery, it would happily pull 0.66 amps at 15 volts again and resume charging the battery at 10 watts - but it will never "reset" itself on its own.

What this means is that you should NEVER connect a standard switching voltage converter directly to a solar panel or it will get "stuck" at a lower voltage and power if the available panel output drops below the required load - even for a moment!
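The whole failure sequence can be simulated with a toy model.  All of the numbers come from the example above; "StupidConverter" is of course my own illustrative name, not a real part:

```python
class StupidConverter:
    """Constant-power load with no MPPT: models the 'stuck' failure mode."""
    def __init__(self, p_target=10.0, v_sunny=15.0, v_min=5.0):
        self.p = p_target        # watts the converter always tries to pull
        self.v = v_sunny         # present input (panel) voltage
        self.v_min = v_min       # lowest the input can be dragged down to

    def step(self, i_panel_max):
        """One time step at a given panel current limit; returns watts delivered."""
        if self.p / self.v > i_panel_max:
            self.v = self.v_min  # overloaded: the input voltage collapses
        return self.v * min(i_panel_max, self.p / self.v)

conv = StupidConverter()
print(conv.step(1.0))    # full sun: 10.0 W, all is well
print(conv.step(0.25))   # cloud: voltage collapses, only 1.25 W
print(conv.step(1.0))    # sun returns - but it's latched at 5 V: 5.0 W
```

Once the input has collapsed, the current needed there (10 W / 5 V = 2 A) always exceeds what the panel can supply, so the converter never climbs back up on its own - exactly the latch-up described above.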

Work-arounds to this "stuck regulator" problem:


The Linear regulator

One obvious work-around to this problem where a switching regulator gets "stuck" is to simply avoid using them, instead using an old-fashioned linear regulator such as an LM317 variable regulator or a fixed-voltage regulator in the 78xx series.  This type of regulator, if outputting 1 amp, will also require an input of 1 amp, the difference in voltage being lost as heat.  If a black cloud comes over - or it is simply morning/evening with less light - and the panel outputs less current, that lower current will simply be passed along to the load.

The problem with a linear regulator is that it can be very inefficient, particularly if the voltage is being dropped significantly.  For example, if you were to charge the 5 volt device at 1 amp from a panel producing 15 volts, your panel would be producing (15 volts * 1 amp =) 15 watts, you would be charging your device at (5 volts * 1 amp =) 5 watts, but your linear regulator would be burning up 10 watts of heat, wasting most of the energy.  On the up side, it simply cannot get "stuck" like a switching converter, it is very simple, it will cause no radio interference, and it is nearly foolproof in its operation.
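The linear-regulator losses in that example are easy to tabulate (a sketch, ignoring the regulator's own small quiescent current):

```python
def linear_regulator(v_in, v_out, i_load):
    """A linear regulator passes its load current straight through, so the
    input power is v_in * i_load and the voltage difference is lost as heat."""
    p_out = v_out * i_load
    p_heat = (v_in - v_out) * i_load
    return p_out, p_heat, p_out / (p_out + p_heat)

p_out, p_heat, eff = linear_regulator(15.0, 5.0, 1.0)
print(f"{p_out:.0f} W delivered, {p_heat:.0f} W as heat, {eff:.0%} efficient")
```

Dropping 15 volts to 5 at 1 amp yields only one third of the panel's power at the load - simple and foolproof, but wasteful.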

Figure 2:
The front of the EvilBay "5 amp MPPT charger".  This is an
inexpensive unit that uses the "Constant Voltage" algorithm (see
below) and is designed primarily to charge lithium chemistry batteries.
One of the potentiometers is used to set the final charge voltage - between
14.2 and 14.6 volts for a "4 cell" LiFePO4 - and the other is set to the "maximum
power voltage" of the panels to which it is connected.  This unit - like most
inexpensive units - requires that the MPPT voltage of the panels be 2-3 volts
higher than the final charge voltage of the battery being charged.
Click on the image for a larger version.

MPPT power controller

A better solution in terms of power utilization would be to use a more intelligent device such as an MPPT (Maximum Power Point Tracking) regulator.  This is a "smarter" version of the switching regulator that, by design, avoids getting "stuck" by tracking how much power is actually available from the solar panel and never tries to pull more current than is available.  For this discussion we'll talk about the two most common types of MPPT systems.

"Perturb and Observe" MPPT:

This method monitors both the current and voltage being delivered by the panel, internally calculating the wattage (volts * amps) on the fly.  Under normal conditions it will change the amount of current that it pulls from the panel up and down slightly to see what happens - hence the name "Perturb and Observe" (a.k.a. "P&O").

For example, suppose that our goal is to get the maximum amount of power and our panel is producing 15 volts at 1 amp, or 15 watts.  Now, the MPPT controller will try to pull, say, 1.1 amps from the panel.  If the panel voltage drops only slightly, to 14.5 volts, we are now being supplied (1.1 amps * 14.5 volts =) 15.95 watts - we were successful in pulling more power to deliver to our load.  Now it will try again, this time pulling 1.2 amps from the panel, but it finds that when it does so the panel voltage drops to 12.5 volts and we are now getting (1.2 amps * 12.5 volts =) 15 watts - clearly a decrease!  Realizing its "mistake", it will quickly go back to pulling 1.1 amps, the setting at which it got the most power.  After this it may reduce its current to 1 amp again to "see" whether things have changed and more power is available - or whether, perhaps, the amount of sunlight has dropped and pulling less current is now the optimal setting.

By constantly "trying" different current combinations to see what provides the most power it will be able to track the different conditions that can affect power output of the solar panel - namely the amount of sun hitting it, the angle of that sun and to a lesser extent, the temperature of the solar panel.
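A minimal sketch of the P&O loop is shown below, run against a made-up panel curve.  Both functions are illustrative only - real controller firmware also handles noise, step sizing and start-up:

```python
def perturb_and_observe(panel_power, i_start=0.5, di=0.1, steps=50):
    """Minimal P&O loop: panel_power(i) returns watts when drawing current i.
    Perturb the drawn current; keep going if power rose, reverse if it fell."""
    i, direction = i_start, +1
    p_last = panel_power(i)
    for _ in range(steps):
        i_try = i + direction * di
        p_try = panel_power(i_try)
        if p_try > p_last:            # improvement: keep the new operating point
            i, p_last = i_try, p_try
        else:                         # worse: reverse the perturbation
            direction = -direction
    return i

def toy_panel(i):
    """Toy panel: 15 V up to 1 A, voltage collapsing linearly above that."""
    v = 15.0 if i <= 1.0 else max(0.0, 15.0 - 25.0 * (i - 1.0))
    return v * i

best = perturb_and_observe(toy_panel)
print(f"settled near {best:.2f} A")   # close to the 1 A maximum-power point
```

Note that a real P&O controller never truly stops - it keeps dithering around the maximum-power point, which is exactly how it tracks changing light.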

Figure 3:
Curves showing the voltage versus current of a typical solar cell.  Once
the current goes above a certain point, the voltage output of a cell
drops dramatically.  The squiggly, vertical line indicates where
the maximum power (e.g. volts * amps) is obtained along the curve.
The upper graphs depict a typical curve with larger amounts
of light while the lower graphs are for smaller amounts of
impinging light.
This graph is from the Wikipedia article about MPPT - link
Click on the image for a slightly larger version.
"Constant Voltage" MPPT:

If you look at the current-versus-voltage curve of a typical solar panel as depicted in Figure 3 you'll note that there is a voltage at which the most power (volts * amps) can be produced (the squiggly vertical line) - a value typically around 70-80% of the open-circuit voltage, or somewhere in the area of 15-18 volts for a typical "12 volt" solar panel made these days.

Note:
Many "12 volt" panels currently being made are intended for use with MPPT controllers and have a bit of extra voltage "overhead" as compared to "12 volt" panels made many years ago before MPPT charging regimens were common.  What this means is that a modern "12 volt" panel may have a maximum power point voltage of 16-17 volts as opposed to 14-15 volts for an "older" panel made 10+ years ago.

One thing that you might notice is that, at least for higher amounts of light, the optimal voltage for maximum power for our hypothetical panel is about the same - approximately 0.45 volts per cell.  We can, therefore, design an MPPT circuit that causes the panel to operate only at that optimum voltage:  If the sunlight is reduced and the voltage starts to drop, the circuit will decrease the current it is pulling, but if the sunlight increases and the voltage starts to rise, it will increase the current to pull the voltage back down.


This method is simpler and cheaper to implement than the "Perturb and Observe" method because one does not need to monitor the current from the panel (it cares only about the voltage) and there does not need to be a small computer or some sort of logic to keep track of the previous adjustments.  For the Constant Voltage ("CV") method the circuit does only one thing:  Adjust the current up and down to keep the panel voltage constant.
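The entire CV control law fits in a couple of lines - the setpoint and gain here are arbitrary illustrative values, not taken from any real chip:

```python
def cv_mppt_step(v_panel, i_draw, v_set=15.0, gain=0.05):
    """One iteration of a Constant-Voltage MPPT loop: if the panel voltage
    is above the setpoint, draw more current to pull it down; if below,
    back off.  No current measurement or history is needed."""
    return max(0.0, i_draw + gain * (v_panel - v_set))

print(cv_mppt_step(16.0, 1.0))   # panel riding high: draw a bit more
print(cv_mppt_step(14.0, 1.0))   # panel sagging: back off a bit
```

Compare this with the P&O sketch earlier: no wattage calculation, no memory of past steps - which is why CV lends itself to a single-chip analog implementation.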

As can be seen from Figure 3, the method of using "one voltage for all situations" is not optimal for all conditions as the voltage at which the most power can be obtained changes with the amount of light, which also changes with the temperature of the panel, age, shading, etc.  The end result of this rather simplistic method of optimization is that one ends up with somewhat lower efficiency overall - around 80% of the power that one might get with a well-performing P&O scheme according to some research. Ref. 1

This method can be optimized somewhat if the circuit is adjusted for maximum power output under "typical" conditions that one might encounter.  For example, if the CV voltage is adjusted when the panel is under (more or less) maximum sun on a typical day, it will produce power most efficiently when solar production is at its highest and making the greatest contribution to the task at hand - such as charging a battery.  In this case, it won't be as well optimized when the illumination is lower (e.g. morning or evening), but because the amount of energy available at those times is lower anyway, a bit of extra loss from the lack of optimization then is less significant than the same percentage of loss during peak production time.

Despite the lower efficiency, the Constant Voltage method is often found as a single-chip solution to implement low-cost MPPT, providing better performance than non-MPPT alternatives.

Actual implementation:

I was able to find an inexpensive (less than US$10, shipped) MPPT charge control board on EvilBay (called "5A MPPT Solar Panel Regulator Battery Charging") that was adjustable to allow its use with solar panels with open-circuit voltages ranging from 9 to 28 volts and its output being capable of being adjusted from 5 to about 17 volts.  This small board had built-in current regulation set to a maximum of 5 amps - more than adequate for the 18 watt panel that I would be using.

From the pictures on the EvilBay posting - and also once I had it in-hand - I could see that it used the Consonance CN3722 MPPT chip. Ref. 2  This chip performs Constant Voltage (CV) MPPT functions and provides a current-regulated output with the components on the EvilBay circuit board limiting the current to a maximum of 5 amps.  Additionally, this board, when used to charge a battery directly, may be adjusted, using onboard potentiometers, to be optimized for the solar panel's Maximum Power voltage (called "Vmp" in the panels' specifications) and adjusted for the finish charge voltage for the battery itself, being suitable for many types of Lithium-Ion chemistries - including the "12 volt" LiFePO4 that I was going to use.
Figure 4:
The back side of the MPPT controller showing the heat sink and connections.
The heat sink is adequate for the ratings of this unit.  To save weight and bulk,
the unit was not put in a case, but rather the wires "zip tied" to the mounting
holes to prevent fatiguing of the wires - and to permit the wires themselves
to offer a bit of protection to the top-side components.
Click on the image for a larger version.

To this end, my portable charging system consists of the solar panel, this MPPT controller and a LiFePO4 battery to provide a steady bus voltage compatible with 12 volt chargers and devices.  By including this "ballast" battery, the source voltage for all of the devices being charged is constant and as long as the average current being pulled from the battery is commensurate with the average solar charging current, it will "ride through" wide variations in solar illumination.  This method has the obvious advantage that a charge accumulated throughout the day can be used in the evening/night to charge those devices or even be used to top off batteries when one is hiking and the panel may not be deployed.

Tweaking the "Constant Voltage" MPPT board:

As noted, the EvilBay CN3722 board had two trimmer potentiometers:  One for setting the output voltage - which would be the "finish" charge voltage for the battery and another for setting the Constant Voltage MPPT point for the panel to be used.

Setting the output voltage is pretty easy:  After connecting it to a bench supply set for 4-6 volts above the desired voltage, I connected a fairly light load to the output terminal and set it for the proper voltage.  For a "12 volt" LiFePO4 battery, this will be between 14.2 and 14.6 volts while the setting for a more conventional "12 volt" LiIon battery would be between 16.2-16.8 volts, depending on the chemistry and desired finish voltage. Ref. 3  Once this adjustment was done I connected a fully-charged battery to the output along with a light load, power-cycled the MPPT controller and watched it as it stabilized, readjusting the voltage as necessary.

Setting the MPPT voltage is a bit trickier.  In this case, a partially discharged battery of the same type and voltage as that adjusted for above is connected to the output of the MPPT controller, in series with an ammeter.  With the solar panel that is to be used connected and laid out in full sun, the "MPPT Voltage" potentiometer is adjusted for maximum current into the battery being charged.  Again, this step requires a partially-discharged battery so that it will take all of the charging current that is available from the panel.

Note that the above procedure presumes that the solar panel is too small to produce enough power to cause the MPPT battery charger itself to go into current limiting - in which case, the current limit is that of the panel itself - which means that the maximum current that is seen at the charging terminal of the battery reflects the maximum power that can be pulled from the panel.  For example, with a panel producing 18 watts and charging a battery at 13.5 volts we could only ever expect to see about 1.33 amps flowing into the battery due to the inability of the panel to supply more power, but maximizing this current by adjusting the "MPPT" voltage control permits optimization for that particular solar panel.
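The expected ceiling on the charge current is just the panel power divided by the battery voltage - a quick sanity check (conversion losses are ignored here):

```python
def max_charge_current(panel_watts, battery_volts, efficiency=1.0):
    """Upper bound on charging current when the panel, not the charger,
    is the limiting factor."""
    return panel_watts * efficiency / battery_volts

# the 18 watt panel charging a LiFePO4 battery sitting at 13.5 volts
print(f"{max_charge_current(18.0, 13.5):.2f} A")
```

If the ammeter reads well below this figure in full sun, the MPPT voltage setting (or the panel orientation) is probably off.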

If the panel is large enough to cause the MPPT controller to current-limit its charging current (around 5 amps for the MPPT controller that I used) then it may be that the panel is oversized slightly for the task - at least at midday, when there is peak sun.  In that case one would make the same adjustment in the morning or evening when the amount of light was low enough that the panel could not cause the charger to current-limit or simply block a section of the panel.

While this charging board would be able to connect directly to almost any rechargeable Lithium battery, it would be awkward to try to adapt it for each type of battery that one might need to charge "on the trail" so I decided to carry with me a small "12 volt" LiFePO4 battery as well:  The solar panel and MPPT controller would charge that battery and then the various lightweight "12 volt" chargers for the different batteries to be charged would connect to it.

It's worth noting that MPPT power controllers use switching techniques to do the efficient conversion of voltage.  What this means is that if a sensitive radio - particularly an HF (shortwave) transceiver - is attached or nearby, the switching operation of the MPPT controller may cause interference unless the controller is enclosed in an RF-tight box with appropriate filtering on the input and output leads.  In practice I haven't found this to be an issue as any HF operation is usually done in the evening, at camp, as things are winding down and the sun isn't out anyway, so the unit is not in service at that time.

Final comments

While the "ballast battery" method has an obvious weight and volume penalty, it has the advantage that if you need to charge a number of different devices, it is possible to find a very small and light 12 volt "car" charger for almost any type of battery that you can imagine.  The other advantage is that with a 12 volt battery that is being charged directly from the MPPT controller, it acts as "ballast", allowing the charging of this "main" battery opportunistically with the available light as well as permitting the charging of the other batteries at any time - including overnight!

The 18 watt panel weighs 519 grams (1.14 pounds), the MPPT charge controller with attached wires and connectors weighs 80 grams (0.18 pounds), a cable connecting the panel to the MPPT controller weighs 60 grams (0.13 pounds), while the 6-7 amp-hour LiFePO4 battery pictured in Figure 1 (Ref. 4) weighs in at 861 grams (1.9 pounds).  The total weight of this power system is about 1520 grams (3.35 pounds) - which can be quite a bit to carry in a backpack, but considering that it can provide the power needs of a fairly large group and that this weight can be distributed amongst several people if necessary, it is "tolerable" for all but those occasions where the utmost in weight savings is imperative.  For a "grab and go" kit that will be transported via a vehicle and carried only a short distance this amount of weight is likely not much of an issue.


* * *
References:

1 - The article "Energy comparison of MPPT techniques for PV Systems" - link - describes several MPPT schemes, how they work, and provides comparison as to how they perform under various (simulated) conditions.

2 - Consonance Electric CN3722 Constant Voltage (CV) MPPT multichemistry battery charger/regulator - Datasheet link.

3 - Particularly true for LiIon cells, reducing the finish (cut-off) voltage by 5-10%, while reducing the available cell capacity, can improve the cell's longevity.  What this means is that if the cut-off voltage of a typical modern LiIon cell, which is nominally 4.2 volts, is reduced to 4.0 volts, all other conditions being equal this has the potential to double the useful working life.  While this lower cut-off voltage may initially reduce the available capacity by as much as 25%, a cell consistently charged to the full 4.2 volts will probably lose this much capacity in a year or so anyway, whereas it will lose much less capacity than that at the lower voltage.  For additional information regarding increasing the longevity of LiIon cells see the Battery University web page "How to Prolong Lithium-based Batteries" - link - and its reference sources.

4 - This LiFePO4 battery has been featured several times before - see these links:
  • Problems with LiFePO4 batteries - link
  • Follow-up:  LiFePO4 batteries revisited - equalization of cells - link
 

    [End]

This page stolen from "ka7oei.blogspot.com".
 

Monday, August 28, 2017

Monitoring the "CT" MedFER beacon from "Eclipse land"


Figure 1:
The MedFER beacon, on the metal roof of my house,
attached to an evaporative ("swamp") cooler.
I must admit that I was "part of the problem" - that is, one of the hordes of people that went north to view the August 21, 2017 eclipse along its line of totality.  In my case I left my home near Salt Lake City, Utah on the Friday before at about 4AM, arriving 4 hours and 10 minutes later - this, after a couple of rest and fuel stops.  On the return trip I waited until 9:30 AM on the Wednesday after, a trip that also took almost exactly 4 hours and 10 minutes, including a stop or two - and I had no traffic in either case.

This post isn't about my eclipse experiences, though, but rather the receiving of my "MedFER" beacon at a distance of about 230 miles (approx. 370km) as a crow flies.

What's a MedFER beacon?

In a previous post I described a stand-alone PSK31 beacon operating just below 1705 kHz at the very top of the AM broadcast ("Mediumwave") band under FCC Part 15 §219 (read those rules here).  This portion of the FCC rules allows the operation of a transmitter on any frequency (barring interference) between 510 and 1705 kHz with an input power of 100 milliwatts, using an antenna that is no longer than 3 meters, "including ground lead."  By operating just below the very top of the allowed frequency range I could maximize my antenna's efficiency and place my signal as far away as possible from the sidebands and splatter of the few stations (seven in the U.S. and Mexico) that operate on 1700 kHz.
Figure 2:
Inside the loading coil, showing the variometer, used to fine-
tune the inductance to bring the antenna system to
resonance.  This coil is mounted in a plastic 5-gallon
bucket, inverted to protect it from weather.

As described in the article linked above, this beacon uses a Class-E output amplifier which allows more than 90% of its DC input power to be delivered as RF, making the most of the 100 milliwatt restriction on input power.  To maximize the efficiency of the antenna system a large loading coil with a variometer, wound using copper tubing, is used to counteract the reactance of the antenna.  The antenna itself is two pieces:  A section, 1 meter long, mounted to the evaporative cooler sitting on and connected to the metal roof of my house and, above that, isolated from the bottom section, an additional 2-meter long section that is top-hatted to increase the capacitance and reduce the required amount of loading inductance, improving overall efficiency.

As it happens, the antenna is mounted in almost exactly the center of the metal roof of my house so one of the main sources of loss - the ground - is significantly reduced, but even with all of this effort the measured feedpoint resistance is between 13 and 17 ohms implying an overall antenna efficiency of just a few percent at most.
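As a sanity check on that "few percent" figure, the textbook approximation for a short monopole's radiation resistance can be applied.  Note the heavy caveats: this formula ignores the top hat (which raises the effective height, and hence the real efficiency), so the result is a conservative lower bound:

```python
import math

def short_monopole_r_rad(height_m, freq_hz):
    """Radiation resistance of an electrically short monopole over a ground
    plane: R_rad ~= 40 * pi^2 * (h / lambda)^2 (textbook approximation,
    no top hat, height much less than a wavelength)."""
    wavelength = 3.0e8 / freq_hz
    return 40.0 * math.pi ** 2 * (height_m / wavelength) ** 2

r_rad = short_monopole_r_rad(3.0, 1705e3)   # the 3 meter Part 15 antenna
efficiency = r_rad / 15.0                   # against ~15 ohms measured feedpoint
print(f"R_rad ~ {r_rad:.2f} ohm, efficiency ~ {efficiency:.1%}")
```

A radiation resistance of roughly a tenth of an ohm against a 13-17 ohm feedpoint resistance lands well under a few percent, consistent with the estimate above.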

Figure 3:
The antenna, loading coil and transmitter, looking up from the base.  In
the extreme foreground in the lower right-hand corner of the picture can
be seen the weather-resistant metal box that contains the transmitter.
Originally intended only as a PSK31 beacon, I later added the capability of operating on 1700 kHz using AM as well as on/off keying of the carrier at the original "1705" kHz PSK31 frequency, permitting the transmission of Morse code messages.  To maximize the likelihood of the signal being detected, this last mode - Morse - I operate using "QRSS3", a "slow" Morse sending speed where the "dit" length of the characters being transmitted is 3 seconds - as is the space between character elements - while a "dah" and the space between characters are each 9 seconds.
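Those timings follow directly from the standard Morse ratios; a short sketch (the MORSE table below covers only the two characters actually sent):

```python
# Standard Morse timing ratios: dah = 3 dits, inter-element gap = 1 dit,
# inter-character gap = 3 dits.  QRSS3 simply stretches the dit to 3 seconds.
MORSE = {'C': '-.-.', 'T': '-'}

def qrss_keying(text, dit_s=3.0):
    """Return a list of (key_down, seconds) segments for text."""
    segments = []
    for ch in text:
        for element in MORSE[ch]:
            segments.append((True, dit_s if element == '.' else 3 * dit_s))
            segments.append((False, dit_s))        # gap between elements
        segments[-1] = (False, 3 * dit_s)          # gap after the character
    return segments

total = sum(t for _, t in qrss_keying('CT'))
print(total)   # 60.0 seconds for a single "CT"
```

At a full minute per two-character ID, patience - and a waterfall display - are clearly required at the receiving end.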

Sending Morse code at such a low speed allows sub-Hz detection bandwidths to be used, greatly improving the rejection of other signals and increasing the probability that the possibly minute amount of received energy will be detected.
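The payoff from a narrow detection bandwidth is easy to quantify, assuming a flat noise floor (the bandwidth figures below are illustrative):

```python
import math

def snr_gain_db(bw_wide_hz, bw_narrow_hz):
    """SNR improvement from narrowing the detection bandwidth, assuming
    the noise power is proportional to bandwidth (flat noise floor)."""
    return 10.0 * math.log10(bw_wide_hz / bw_narrow_hz)

# e.g. a 2400 Hz SSB passband versus a ~0.33 Hz FFT bin
print(f"{snr_gain_db(2400.0, 0.33):.1f} dB")
```

Nearly 40 dB of effective gain is the difference between a signal buried hopelessly in the noise and one that paints a readable trace on a waterfall.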

Detecting it from afar:

Even though this beacon had been "received" as far away as Vancouver, BC (about 800 miles, or 1300 km) using QRSS during deep, winter nights, I was curious if I could hear it during a summer night near Moore, ID at that 230 mile (370km) distance.  Because we were "camping" in a friend's yard, we (Ron, K7RJ and I) had to put up an antenna to receive the signal.

The first antenna that we put up received strong AC mains-related noise - likely because it paralleled the power line along the road.  Re-stringing the same 125-ish feet (about 37 meters) of antenna wire at a right angle to the power line and stretching out a counterpoise along the ground got better results:  Somewhat less power line noise.  It was quickly discovered that I needed to run both the receiver and the laptop on battery, as any connection to the power line seemed to conduct noise into the receiver - probably a combination of noise already on the power line as well as the low-level harmonics of the computer's switching power supply.

I'd originally tried using my SDR-14 receiver, but I soon realized that between the rather low signal levels being intercepted by the wire - which was only about 10 feet (3 meters) off the ground - and the relative insensitivity of this device I wasn't able to "drive" its A/D converter very hard, resulting in considerable "dilution" of the received signals due to quantization noise.  In other words, it was probably only using 4-6 bits of the device's 14 bit A/D converter!

I then switched to my FT-817 (with a TCXO) which had no trouble "hearing" the background noise.  Feeding the output of the '817 into an external 24 bit USB sound card (the sound card input of my fairly high-end laptop - as with most laptops - is really "sucky") I did a "sanity check" of the frequency calibration of the FT-817 and the sound card's sample rate using the 10 MHz WWV signal and found it to be within a Hertz of the correct frequency and then re-tuned the receiver to 1704.00 kHz using upper-sideband.  It had been several years since I'd measured the precise frequency of my MedFER beacon's carrier, last observed at 1704.966 kHz, so I knew that it would be "pretty close" to that value - but I wasn't sure how much its crystal might have drifted over time.

For the signal analysis I used both "Spectrum Lab" by DL4YHF (link here) and the "Argo" program by I2PHD (link here).  Spectrum Lab is a general-purpose spectral analysis program with a lot of configurability which means that there are a lot of "knobs" to tweak, but Argo is purposely designed for modes like QRSS using optimized, built-in presets and it was via Argo that I first spotted some suspiciously coherent signals at an audio frequency of between 978 and 980 Hz, corresponding to an RF carrier frequency of 1704.978 to 1704.980 kHz - a bit higher than I'd expected.

As we watched the screen we could see a line appear and disappear with the QSB (fading) and we finally got a segment that was strong enough to discern the callsign that I was sending - my initials "CT".

Figure 4
An annotated screen capture of a brief reception, about 45 minutes after local sunset, of the "CT" beacon using QRSS3 with the "oldest" signals at the left.  As can be seen, the signal fades in so that the "T" of a previous ID, a complete "CT" and a partial "C" and a final "T" can be seen on the far right.  Along the top of the screen we see that ARGO is reporting the peak signals to be at an audio frequency of 978.82 Hz which, assuming that the FT-817 is accurately tuned to 1704.00 kHz indicates an actual transmit frequency of about 1704.979 kHz.

As we continued to watch the ARGO display now and again we could see the signal fade in and out and be occasionally clobbered by the sidebands of an AM radio station on 1700 kHz - at least until something was turned on in a nearby house that put interference everywhere around the receive frequency.

The original plan:

The main reason for leaving the MedFER beacon on the air during the eclipse and going through the trouble of setting up an antenna was to see if, during the depth of the eclipse, its signal popped up out of the noise - the idea being that the ionospheric "D" layer would disassociate in the temporary darkness along the path between my home, where the eclipse would attain about 91% totality, and the receive location within the path of totality, allowing the signal to emerge.  In preparation for this I set up the receiver and the ARGO program to automatically capture - and then re-checked it about 5 minutes before totality.

Unfortunately, while I'd properly set up ARGO to capture, I'd not noticed that I'd failed to click on the "Start Capturing" button in ARGO and the computer happily ran unattended until, perhaps, 20 minutes after totality, so I have no way of knowing if the signal did pop up during that time.  I do know that when I'd checked on it a few minutes before totality there was no sign of the "CT" beacon on the display.

In retrospect, I should have done several things differently:
  • Brought a shielded "H" loop that would offer a bit of receive signal directionality and the ability to reject some of the locally-generated noise and would have saved us the hassle of stringing hundreds of feet of wire through trees.  Some amplification with this loop would also have helped the SDR-14 work properly.
  • Actually checked to make certain that the screen capture was activated!
  • Recorded the entire event to an uncompressed audio (e.g. ".WAV") file so that it could be re-analyzed later.
 Oh well, you live and learn!

P.S.  After I returned I measured the carrier frequency of the MedFER beacon using a GPS-locked frequency reference and found it to be 1704.979 kHz - just what was measured from afar!

[End]

This information stolen from ka7oei.blogspot.com

Tuesday, August 15, 2017

Analyzing "fake" solar eclipse viewing glasses - how good/bad are they?

Note:  Please read and heed the warnings in this article.

About a month and a half ago I ordered some "Eclipse Viewing Glasses" from Amazon - these being those cardboard things with plastic filters.  When I got them, I looked through them and saw that they were very dark - and in looking briefly at the sun through them they seemed OK.
Figure 1:
The suspect eclipse viewing glasses.
These are the typical cardboard frame glasses with very dark plastic lenses.
Click on the image for a slightly larger version.

I was surprised and chagrined when, a few days ago, I got an email from Amazon saying that they were unable to verify to their satisfaction that the supplier of these glasses had, in fact, used proper ISO rated filters and were refunding the purchase price. This didn't mean that they were defective - it's just that they couldn't "guarantee" that they weren't.

I was somewhat annoyed, of course, that this had happened too close to the eclipse to allow me to get some "proper" glasses, but I then started thinking:  These glasses look dark - just how good, or bad, are they?

I decided to analyze them.

WARNING - PLEASE READ!

What follows is my own, personal analysis of "potentially defective" products that, even when used properly, may result in permanent eye damage.  This analysis was done using equipment at hand and should not be considered to be scientifically rigorous or precise.

DO NOT take what follows as a recommendation - or even an inference - that the glasses that I tested are safe, or that if you have similar-looking glasses, that they, too, are safe to use!

Figure 2:
The 60 watt LED light used for testing.  This "flashlight" consists of
a 60 watt Luminus white LED with a "secondary" lens placed in front of it.
The "primary" lens (a 7" diameter Fresnel) used to collimate the beam
was removed for this testing.
Click on the image for a larger version.
This analysis is relevant only to the glasses that I have and there is no guarantee that any similar-looking glasses that you have will perform the same way.  If you choose to use similar glasses that you might have, you are doing so at your own risk and I cannot be held liable for your actions!


YOU HAVE BEEN WARNED!

White Light transmission test:

I happen to have on hand a homemade flashlight that uses a 60 watt white LED that, when viewed up close, would certainly be capable of causing eye damage when operating at full power - and this seemed to be a good, repeatable candidate for testing.  For measuring the brightness I used a PIN photodiode (a Hamamatsu S1223-01):  relative intensity could be ascertained by measuring the photon-induced current with and without the filter in place.

Using my trusty Fluke 87V multimeter, when the photodiode was placed 1/4" (about 6mm) away from the light's secondary lens I consistently measured a current of about 53 milliamps - a significantly higher current than I can get from exposing this same photodiode to the noonday sun.  In the darkened room, I then had the challenge of measuring a far smaller current.

Switching the Fluke to its "Hi Resolution" mode, I had, at the lowest range, a resolution of 10 nanoamps - but I was getting a consistent reading of several hundred nanoamps even when I covered the photodiode completely.  It finally occurred to me that the photodiode - being a diode - might be picking up stray RF from radio and TV stations as well as the ever-present electromagnetic field from the wires within our houses, so I placed a 0.0022uF capacitor across it and now had a reading of -30 nanoamps, or -0.03 microamps.  Reversing the leads on the meter did not change this reading so I figured that this was due to an offset in the meter itself and "zeroed" it out using the meter's "relative reading" function.  Just to make sure that all of the current that I was measuring was from the front of the photodiode I covered the back side with black electrical tape.
Figure 3:
A close up of the S1223-01 photodiode and capacitor in front of the LED.
The bypass capacitor was added to minimize rectification of stray RF
and EM fields which caused a slight "bias" in the low-current readings.
Click on the image for a larger version.

I then placed the plastic film lens of the glasses in front of the LED, atop the flashlight's secondary lens - and it melted.

Drat!

Moving to a still-intact "unmelted" portion of the lens I held it against the photodiode this time, placing it about 1/4" away from the LED as well and got a consistent reading of 0.03-0.04 microamps, or 30-40 nanoamps.  Re-doing this measurement several times, I verified the consistency of these numbers.

Because the intensity of the light is proportional to the photodiode current, we can be reasonably assured that the ratio of the "with glasses" and "without glasses" currents are indicative of the amount of attenuation afforded by these glasses, so:

53 mA = 5.3 x 10^-2 amps - direct LED, no glasses
40 nA = 4.0 x 10^-8 amps - through the glasses

The ratio is therefore:

5.3 x 10^-2 / 4.0 x 10^-8 = 1325000

What this implies is that there is a 1.325 million-fold reduction in the brightness of the light.  Compare this with #12 welding glass, which has about a 30000 (30k)-fold reduction of visible light and is the absolute minimum considered to be "safe" for direct viewing, while #14 offers about a 300000 (300k)-fold reduction.  According to various sources (NASA, etc.) a reduction of 100000 (100k)-fold will yield safe direct viewing.  The commonly available #10 welding glass offers only "about" a 10000 (10k)-fold reduction at best and is not considered to be safe for direct solar viewing.
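The current-ratio arithmetic can be sketched as follows.  Note that the conversion from optical density to an approximate welding-shade number (S = 7D/3 + 1) is the commonly published relation and is an addition of mine, not something from the measurements above:

```python
import math

# Photodiode currents from the test above
i_direct = 53e-3    # amps, LED direct, no filter
i_filtered = 40e-9  # amps, through the glasses

ratio = i_direct / i_filtered  # attenuation factor
od = math.log10(ratio)         # optical density

# Commonly published relation between welding shade number and
# optical density (an assumption of mine, not from the article)
shade = 7.0 * od / 3.0 + 1.0

print(f"Attenuation: {ratio:,.0f}x  (OD {od:.2f}, ~shade {shade:.1f})")
```

By this rough reckoning the glasses come in somewhere above a #14 welding glass in visible-light attenuation.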
Figure 4:
The typical spectral output of a "white" LED (blue line) and
a typical silicon PIN photodiode (black line.)  The distinct peak
is from the internal blue LED while the "yellow" Ce:YAG
phosphors emit longer wavelengths to produce a "white" light.
As can be seen, the sensitivity of the photodiode increases
with longer wavelengths while the spectral output of a white
LED drops.
Click on the image for a larger version.

This reading can't be taken entirely at face value as it assumes that the solar glasses have an even color response over the visible range - but in looking through them, they are distinctly red-orange.  Because the spectrum of the white LED is mostly red-yellow with some blue (white LEDs use a blue LED and a phosphor to produce the rest of the spectrum) and contains very little infrared, we are doing a bit of an apples-to-oranges comparison.

In addition to this, the response of the photodiode itself is not "flat" over the visible spectrum, peaking in the near-infrared and trailing off with shorter wavelengths - that is, toward the blue end.  Figure 4, above, shows the relative peak light outputs of a typical "white" LED overlaid with the response of the photodiode and one can see that they are somewhat complementary.

To a limited degree, these two different curves will negate each other in that the sensitivity of the photodiode is tilted toward the "red" end of the spectrum.  With the inference being that these glasses may be "dark enough", I wanted to take some more measurements.

Photographing the sun:

As it happens I have a Baader ND 5.0 solar film filter for my 8" telescope to allow direct, safe viewing of the sun via the telescope.  Because I'd melted a pair of glasses in front of the LED, I wasn't willing to make the same measurement with this (expensive!) filter so I decided to place each filter in front of the camera lens and photograph the sun using identical exposure settings as seen in Figure 5, below.

Figure 5:
The Baader filter on the left and the suspect glasses on the right.
These pictures were taken through a 200mm zoom lens using a Sigma SD-1 camera set to ISO 200 at F8 and 1/320th of a second.  Both use identical, fixed "Daylight" white balance.
Click on the image for a larger version.

What is very apparent is that the Baader filter is pretty much neutral in tone while the glasses are quite red.  To get a more meaningful measurement, I used an image manipulation program to determine the relative brightness of the R, G and B channels with their values rescaled to 8 bits.  Because the camera that I used - a Sigma SD-1 - actually has true RGB channels with its Foveon sensor rather than the more typical Bayer filter matrix, these levels are reasonably accurate.  Note that the numbers below do not take "gamma" (discussed later) into account.

For the Baader filter:
  • Red = 163
  • Green = 167
  • Blue = 162
For the glasses:
  • Red = 211
  • Green = 67
  • Blue = 0 
Again, this seems to confirm that the glasses are quite red - with a bit of yellow thrown in, which explains the orange-ish color.  Clearly, the glasses let in more red than the Baader, but the visible energy overall would appear to be roughly comparable using this method.

What the eye cannot see:

It is not just visible light that can damage the eye's retina, but also ultraviolet and infrared - and these wavelengths are a problem because their invisibility will not trigger the normal, protective pupillary response.  I have no easy way to measure these glasses' attenuation of ultraviolet, but given the complete lack of blue - and the fact that many plastics do a pretty good job of blocking UV - I wasn't particularly worried about it.  If one were worried, ordinary glasses or a piece of polycarbonate plastic would likely block much of the UV that managed to get through.

Infrared is another concern - and the sun puts out a lot of it!  What's more is that many plastics - even strongly tinted ones - will transmit near infrared quite easily even though they may block visible light.  An example of this is the "theater gel" used to color stage lighting:  These gels can have a deep hue, but most are nearly transparent to infrared - and this also helps prevent them from instantly bursting into flame when placed in front of hot lights.

Because of this I decided to include near-infrared in my measurements.  In addition to my Sigma SD-1, I also have an older SD-14 and a property of both of these cameras is that they have easily-removable "hot mirrors" which double as dust protectors.  What this means is that in a matter of seconds, one can adapt the camera to "see" infrared.  Using my SD-14 (that camera is mostly retired, and I didn't want to get dust on the SD-1's sensor) I repeated the same test with the hot mirror removed as can be seen in Figure 6.

Figure 6:
The Baader filter on the left and the glasses on the right showing the relative brightness when photographed in visible light + near infrared.
This camera, a Sigma SD-14, was set to ISO 100 at F25 and 1/400th of a second using the same 200mm lens as Figure 5.
Click on the image for a larger version.

According to published specifications (see this link) the response of the red channel of the Foveon sensor is fairly flat from about 575 to 775 nanometers and useful out to a bit past 900 nanometers, while the other channels - particularly the blue - have a bit of overlapping response, and the hot mirror itself very strongly attenuates wavelengths longer than 675 nanometers.  What this means is that by analyzing the pictures in Figure 6, we can get an idea as to how much infrared the respective filters pass by noting the 8-bit converted RGB levels:


For the Baader filter:
  • Red = 111
  • Green = 0
  • Blue = 62
For the glasses:
  • Red = 224
  • Green = 0
  • Blue = 84 
While the cameras used for figures 5 and 6 aren't the same, they use the same imager technology, which is known to have the same spectral response.  Taking into account the ISO differences, there is an approximate 3-4 F-stop difference between the two exposures (some of this is due to the fact that the morning sun was higher when the infrared pictures were taken), indicating that there is a significant amount of infrared energy - particularly manifest by the fact that the exposure had to be reduced to the point where the green channel no longer shows any reading when using the Baader filter.

(Follow this link for a comparison of the transmission spectra of common filter media and follow this link for a discussion about the Baader filter in particular.)

What is clear is that the glasses let in a significant amount more infrared than the Baader filter within the response curve of the sensor - but by how much?

The data indicates that the pixel brightness of the "Red+IR" channel of the glasses is twice that of the Baader filter, but if one accounts for the gamma correction applied to photographic images (read about that here - link) - and presumes this gamma value to be 2 - we can determine that the actual difference between the two is closer to 4:1.
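The gamma arithmetic above works out like this - a quick sketch assuming, as in the text, a simple power-law gamma of 2:

```python
# Undo the display gamma to compare pixel values on a linear scale.
gamma = 2.0
red_glasses = 224  # 8-bit "Red+IR" pixel value, glasses
red_baader = 111   # 8-bit "Red+IR" pixel value, Baader filter

# A gamma-encoded ratio of ~2:1 becomes ~4:1 in linear light
linear_ratio = (red_glasses / red_baader) ** gamma
print(f"Linear brightness ratio: about {linear_ratio:.1f}:1")
```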

What does all of this mean?

In terms of visible light, these particular "fake" glasses appear to transmit about the same amount as the known-safe Baader filter - although the glasses don't offer true color rendition, putting a distinct red-orange cast on the solar disk.  In the infrared range - likely between 675 and 950 nm - the glasses seem to pass about 4 times the light of the Baader filter.

At this point it is worth reminding the reader that this Baader filter is considered to be "safe" when placed over a telescope - in this case, my 8" telescope, as the various glass/plastic lenses along the optical path (e.g. corrector lens, eyepiece, etc.) will adequately block any stray UV.  What this means is that despite the tremendous light-gathering advantage of this telescope over the naked eye, the Baader filter still has a generous safety margin.  (It should be noted that this Baader film is not advertised to be "safe for direct viewing".  Their direct-viewing film has stronger blue/UV and IR blocking.)

What may be inferred from this is that, based solely on the measurements that I obtained with these glasses, it would seem that they may let in about 4 times the amount of infrared (e.g. >675nm) light as the Baader filter.

Again, I did not have the facility to determine if these glasses adequately block UVA/B radiation - but the combination of these glasses and good-quality sunglasses will block UV A/B - and provide additional light reduction overall.

Will I use them?

Based on my testing, these particular glasses seem to be reasonably safe in most of the ways that matter, but whatever "direct viewing" method I choose (e.g. these glasses or other alternatives) I will be conservative:  taking only occasional glances.

(I will acquire some "bona-fide" glasses and analyze them when I get a chance.)

* * *
Once again:

WARNING - PLEASE READ!
 
What preceded was my own, personal analysis of potentially defective products that, even when used properly, may result in permanent eye damage.  This analysis was done using equipment at hand and should not be considered to be scientifically rigorous or precise.

DO NOT take the preceding as a recommendation - or even an inference - that the glasses that I tested are safe, or that if you have similar-looking glasses, that they, too, are safe to use!

This analysis is relevant only to the glasses that I have and there is no guarantee that any similar-looking glasses that you have will perform the same way.  If you choose to use similar glasses that you might have, you are doing so at your own risk and I cannot be held liable for your actions!

YOU HAVE BEEN WARNED!

    [End]

This page stolen from "ka7oei.blogspot.com".

Thursday, July 20, 2017

A 173 mile (278km) all-electronics, FSO (Free Space Optical) contact: Part 1 - Scouting it out

Nearly 10 years ago - in October, 2007, to be precise - we (exactly "who" to be mentioned later) successfully managed a 173 mile, Earth-based all-electronic two-way contact between two remote mountain ranges in western Utah.

For many years before this I'd been mulling over in the back of my mind various ways that optical ("lightbeam") communications could be accomplished over long distances.  Years ago, I'd observed that even a modest, 2 AA-cell focused-beam flashlight could be easily seen over a distance of more than 30 miles (50km) and that sighting even the lowest-power Laser over similar distances was fairly trivial - even if holding a steady beam was not.  Other than keeping such ideas in the back of my head, I never really did more than this - at least until the summer of 2006, when I ran across a web site that intrigued me, the "Modulated Light DX page" written by Chris Long (now amateur radio operator VK3AML) and Dr. Mike Groth (VK7MJ).  While I'd been following the history and progress of such things all along, this and similar pages rekindled the intrigue, causing me to do additional research - and I began to build things.

Working up to the distance...

Over the winter of 2006-2007 I spent some time building, refining, and rebuilding various circuits having to do with optical communications.  Of particular interest to me were circuits used for detecting weak optical signals and it was those that I wanted to see if I could improve.  After considerable experimentation, head-scratching, cogitation, and testing, I was finally able to come up with a fairly simple optical receiver circuit that was at least 10dB more sensitive than other voice-bandwidth circuits that were out there.  Other experimentation was done on modulating light sources and the first serious attempt at this was building a PIC-based PWM (Pulse-Width Modulation) circuit followed, somewhat later, by a simpler current-linear modulator - both being approaches that seemed to work extremely well.

After this came the hard part:  Actually assembling the mechanical parts that made up the optical transceivers.  I decided to follow the field-proven Australian approach of using large, plastic, molded Fresnel lenses in conjunction with high-power LEDs for the source of light emissions with a second parallel lens and a photodiode for reception and the stated reasons for taking this approach seemed to me to be quite well thought-out and sound - both technically and practically.  This led to the eventual construction of an optical transceiver that consisted of a pair of identical Fresnel lenses, each being 318 x 250mm (12.5" x 9.8") mounted side-by-side in a rigid, wooden enclosure comprising an optical transceiver with parallel transmit and receive "beams."  In taking this approach, proper aiming of either the transmitter or receiver would guarantee that the other was already aimed - or very close to being properly aimed - requiring only a single piece of gear to be deployed with precision.

After completing this first transceiver I hastily built a second one to be used at the "other" end of the test path.  Constructed of foam-core posterboard, picture frames and inexpensive, flexible vinyl "full-page" magnifier Fresnel lenses, this transceiver used my original, roughly-repackaged prototype circuits for the optical emitter and detector assemblies.  While it was neither pretty nor capable of particularly high performance, it filled the need of being the "other" unit with which communications could be carried out for testing:  After all, what good would a receiver be if there were no transmitters?

On March 31, 2007 we completed our first 2-way optical QSO over a path that crossed the Salt Lake Valley, a distance of about 24 km (15 miles).  We were pleased to note that our signals were extremely strong:  despite the fact that our optical path crossed directly over downtown Salt Lake City, they exhibited a 30-40dB signal-to-noise ratio - if you ignored some 120 Hz hum and the occasional "buzz" from an unseen, failing streetlight.  We also noted a fair amount of amplitude scintillation, but this wasn't too surprising considering that the streetlights visible from our locations also seemed to shimmer, being subject to the turbulence caused by the ever-present temperature inversion layer in the valley.

Bolstered by this success we conducted several other experiments over the next several months, continuing to improve and build more gear, gain experience, and refine our techniques.  Finally, for August 18, 2007, we decided on a more ambitious goal:  spanning a 107-mile optical path.  By this time I'd completed a third optical transceiver using a pair of larger (430mm x 404mm, or 16.9" x 15.9") Fresnel lenses, and it significantly out-performed the "posterboard" version that had been used earlier.  On this occasion we were dismayed by the amount of haze in the air - the remnants of smoke that had blown into the area just that day from California wildfires.  Ron, K7RJ and company (his wife Elaine, N7BDZ and Gordon, K7HFV), who went to the northern end of the path (near Willard Peak, north of Ogden, Utah), experienced even more trials, having had to retreat on three occasions from their chosen vantage point due to brief but intense thunderstorms.  Finally, just before midnight, a voice exchange was completed - with some difficulty - over this path with the southern end (Clint, KA7OEI and Tom, W7ETR) located near Mount Nebo, southeast of Payson, Utah, despite the fact that the northern crew could never see the distant transmitter with the naked eye due to the combination of haze and light pollution.

Figure 1:
The predicted path projected onto a combination
map and satellite image.  At the south end
(bottom) is Swasey Peak while George Peak is
indicated at the north.
Click on the image for a larger version.
Finding a longer path:


Following the successful 107-mile exchange we decided that it was time to try an even greater distance.  After staring at maps and poring over topographical data we found what we believed to be a 173-mile line-of-sight shot that seemed to provide reasonable accessibility at both ends - see Figure 1.  This path spanned the Great Salt Lake Desert - some of the flattest, most desolate and most remote land in the continental U.S.  At the south end of this path was Swasey Peak, the tallest point in the House range, a series of mountains about 70 miles west of Delta, in west-central Utah.  Because Gordon had hiked this peak on more than one occasion we were confident that this goal was quite attainable.

At the north end of the path was George Peak in the Raft River range, an obscure line of mountains that runs east and west in the extreme northwest corner of Utah, just south of the Idaho border.  None of us had ever been there before, but our research indicated that it should be possible to drive there using a high-clearance 4-wheel drive vehicle so, on August 25, 2007, Ron and Gordon piled into my Jeep (along with a 2nd spare tire swiped from Ron's Jeep, as recommended by more than one account) and we headed north to investigate.

Getting there:

Following the Interstate highway nearly to the Idaho border, we turned west onto a state highway, following it as the road swung north into Idaho, passing the Raft River range, and we then turned off onto a gravel road to Standrod, Utah.  In this small town (a spread-out collection of houses, really) we turned onto a county road that began to take us up canyons on the northern slope of the range.  As we continued to climb, the road became rougher and we resorted to peering at maps and using our intuition to guide us onto the one road that would take us to the top of the mountain range.

Luckily, our guesses were correct and we soon found ourselves at the top of the ridge.  Traveling for a short distance, we ran into a problem:  The road stopped at a fence gate that was plastered with "No Trespassing" signs.  At this point, we simply began to follow what looked like a road that paralleled the fence, only to discover - after traveling several hundred feet, and past a point at which we could safely turn around - that this "road" had degenerated into a rather precarious dirt path traversing a steep slope.  After driving several hundred more feet, fighting all the while to keep the Jeep on the road and moving in a generally forward direction, the path leveled out once again and rejoined what appeared to be the main road.  After a combination of both swearing at and praising deities we vowed that we would never travel on that "road" again and simply stay on what had appeared to be the main road, regardless of what the signs on the gates said!

Looking for Swasey Peak:

Having passed these trials, we drove along the range's ridge top, looking to the south.  On this day, the air was quite hazy - probably due to wildfires that were burning in California, and in the distance we could vaguely spot, with our naked eyes, the outline of a mountain range that we thought to be the House range:  In comparing its outline and position with a computer-simulated view, it "looked" to be a fairly close match as best as we could guess.

Upon seeing this distant mountain we stopped to get a better look, but when we looked through binoculars or a telescope the distant outline seemed to disappear - only to reappear once again when viewed with the naked eye.  We finally realized what was happening:  Our eyes and brain are "wired" to recognize objects, in part, by detecting their outlines, but in this case the haze reduced the contrast considerably.  With the naked eye, the distant mountain was quite small, but with the enlarged image in the binoculars and telescope the apparent contrast gradient around the object's outline was greatly diminished.  The trick to being able to visualize the distant mountain turned out to be keeping the binoculars moving, as our eyes and brain are much more sensitive to slight changes in brightness of moving objects than stationary ones.  After discovering this fact, we noticed with some amusement that the distant mountain seemed to vanish from sight once we stopped wiggling the binoculars, only to magically reappear when we moved them again.  For later analysis we also took pictures at this same location and noted the GPS coordinates.

Continuing onwards, we drove along the ridge toward George Peak.  When we got near the GPS coordinates that I had marked for the peak we were somewhat disappointed - but not surprised:  The highest spot in the neighborhood, the peak, was one of several gentle, nondescript hills that rose above the road by only a few tens of feet.  Stopping, we ate lunch, looked through binoculars and telescopes, took pictures, recorded GPS coordinates, and thought apprehensively about the return trip along the road.
Figure 2:
The predicted line-of-sight view (top) based on 1 arc-second SRTM terrain data between the Raft River range
and Swasey peak as seen from the north (Raft River) side.
On the bottom is an actual photograph of the same scene at the location used in the simulated view.  As can be seen,
more of the distant mountain can be seen than the prediction would indicate, this being due to the refraction of
the atmosphere slightly extending the visible horizon.  Under typical conditions, this "extension" amounts to
an extension to approximately 10/9 of the distance that geometry alone would predict.  This lower picture was produced
by "stacking" multiple images using software designed for astronomy.
Click on the image for a larger version.

Returning home:

Retracing our path - but not taking the "road" that had paralleled the fence line - we soon came to the gate that marked the boundary of the private land.  While many of the markings were the same at this gate, we noticed another sign - one that had been missing from the other end of the road - indicating that this was, in fact, a public right-of-way plus the admonition that those traveling through must stay on the road.  This sign seemed to register with what we thought we'd remembered about Utah laws governing the use of such roads and our initial interpretation of the county parcel maps:  Always leave a gate the way you found it, and don't go off the road!  With relief, we crossed this parcel with no difficulty and soon found ourselves at the other gate and in familiar territory.

Retracing our steps down the mountain we found ourselves hurtling along the state highway a bit more than an hour later - until I heard the unwelcome sound of a noisy tire.  Quickly pulling over, I discovered that a large rock had embedded itself in the middle of the tread of a rear tire.  After 45 minutes spent changing the tire and bringing the spare up to full pressure, we were again underway - but with only one spare remaining...

Analyzing the path:

Upon returning home I was able to analyze the photographs that I had taken.  Fortunately, my digital SLR camera takes pictures in "Raw" image mode, preserving the digital picture without the loss caused by converting it to a lossy format like JPEG.  Through considerable contrast enhancement, the "stacking" of several similar images using an astronomical photo processing program, and comparison against a computer-generated view, I discovered that the faint outline that we'd seen was not Swasey Peak but was, in fact, a range that was about 25 miles (40km) closer - the Fish Springs mountains - a mere 150 or so miles (240km) away.  Unnoticed (or invisible) at the time of our mountaintop visit was another small bump in the distance that was, in fact, Swasey Peak.
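The reason stacking several similar frames pulls a faint feature out of the noise is that averaging N frames reduces random noise by roughly the square root of N while leaving the constant signal untouched.  A toy simulation of that principle (my own illustration - the real work was done with astronomical stacking software, not this code):

```python
import random
import statistics as stats

random.seed(0)

# Simulated faint feature: a constant "signal" of amplitude 1.0 buried
# in random noise with a standard deviation of 10 - invisible in one frame.
def frame(n=1000):
    return [1.0 + random.gauss(0.0, 10.0) for _ in range(n)]

single = frame()
frames = [frame() for _ in range(64)]
stacked = [sum(col) / len(col) for col in zip(*frames)]  # per-pixel average

# Averaging 64 frames should cut the noise by about sqrt(64) = 8x
print(f"single-frame noise: {stats.stdev(single):.2f}")
print(f"64-frame stack:     {stats.stdev(stacked):.2f}")
```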

Interestingly, the first set of pictures were taken at a location that, according to the computer analysis, was barely line-of-sight with Swasey Peak.  At the time of the site visit we had assumed that the just-visible mountain that we'd seen in the distance was Swasey Peak and that there was some sort of parallax error in the computer simulation, but analysis revealed that not only was the computer simulation correct in its positioning of the distant features, but also that the apparent height of Swasey Peak above the horizon was being enhanced by atmospheric refraction - a property that the program did not take into account:  Figure 2 shows a comparison between the computer simulation and an actual photograph taken from this same location.
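The refraction effect noted above can be sketched with the usual horizon-distance formula, applying the ~10/9 distance extension quoted in the caption of Figure 2.  The station height used here is an illustrative assumption (very roughly the height of such peaks above the intervening desert floor), not a surveyed value:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def horizon_km(height_m, refraction=True):
    """Distance to the optical horizon for an observer height_m above
    the terrain, optionally extended by the ~10/9 refraction factor."""
    d = math.sqrt(2.0 * R_EARTH_KM * (height_m / 1000.0))
    return d * (10.0 / 9.0) if refraction else d

# Illustrative: two stations, each assumed ~1600 m above the desert floor
d_total = horizon_km(1600) + horizon_km(1600)
print(f"Combined horizon distance: {d_total:.0f} km")  # comfortably > 278 km
```

With typical refraction the combined horizon distance exceeds the 278 km (173 mile) path, consistent with the mountains being mutually visible.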


Building confidence - A retry of the 107-mile path:

Having verified to our satisfaction that we could not only get to the top of the Raft River mountains but that we also had a line-of-sight path to Swasey Peak, we began to plan for our next adventure, watching the weather and the air over the next several weeks.  Before attempting the longer path, however, we wanted to try our 107-mile path again in clearer weather to make sure that our gear was working, to gain more experience with its setup and operation, and to see how well it would work over a long optical path given reasonably good seeing conditions:  If we had good success over a 107-mile path we felt confident that we should be able to manage a 173-mile path.

A few weeks later, on September 3, we got our chance:  Taking advantage of clear weather just after a storm front had moved through the area we went back to our respective locations - Ron, Gordon and Elaine at Inspiration Point while I went (with Dale, WB7FID) back to the location near Mt. Nebo.  This time, signal-to-noise ratios were 26dB better than before and voice was "armchair" copy.  Over the several hours of experimentation we were able to transmit not only voice, but SSTV (Slow-Scan Television) images over the LED link - even switching over to using a "raw" Laser Pointer for one experiment and a Laser module collimated by an 8" reflector telescope in another.

With our success on the clear-weather 107-mile path we waited for our window to attempt the 173-mile path between Swasey and George Peak but in the following weeks we were dismayed by the appearance of bad weather and/or frequent haze - some of the latter resulting from the still-burning wildfires around the western U.S.

To be continued!

[End]

This page was stolen from "ka7oei.blogspot.com"

Wednesday, June 21, 2017

Odd differences between two (nearly) identical PV systems

I've had my 18-panel (two groups of 9) PV (solar) electric system in service for about a year and recently I decided to expand it a bit after realizing that I could do so, myself, for roughly $1/watt after tax incentives.  And so it was done, with a bit of help from a friend of mine who is better at bending conduit than I:  Another inverter and 18 more solar panels were set on the roof - all done using materials and techniques equal to or better than the original installation in terms of both quality and safety.

Adding to the old system:

The older inverter, a SunnyBoy SB 5000-TL, is rated for a nominal 5kW, but with its 18 panels - 9 on each of the opposite faces of my east/west facing roof (the ridge line precisely oriented true north-south) - it would, in real life, produce more than 3900 watts for only an hour or so around "local noon", and then only on late spring and early fall days that were both exquisitely clear and very cool (e.g. below 70F, 21C).  I decided that the new inverter need not be a 5kW unit, so I chose the newer - and significantly less expensive - SunnyBoy SB3.8, an inverter nominally rated at 3.8kW.  The rated efficiencies of the two inverters are pretty much identical - both in the 97% range.
Figure 1:
The installed 3.8 kW inverter in operation with the 2kW
"SPS" (Secure Power System) island power outlet shown below.
Click on the image for a larger version.

One reason for choosing this lower-power inverter was to stay within the bounds of the rating of my main house distribution panel.  My older inverter, being rated for 5kW was (theoretically) capable of putting 22-25 amps onto the panel's bus, so a 30 amp breaker was used on that branch circuit while the new inverter, capable of about 16 amps needed only a 20 amp breaker.  This combined, theoretical maximum of 50 amps (breaker current ratings, not actual, real-world current from the inverters and their panels!) was within the "120% rule" of my 125 amp distribution panel with its 100 amp main breaker:  120% of 125 amps is 150 amps, so my ability to (theoretically) pull 100 amps from the utility and the combined capacity of the two inverters (again, theoretically - not real-world) being 50 amps was within this rating.
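The breaker arithmetic above can be sketched as follows - a minimal illustration of the "120% rule" as described here, not electrical-code advice:

```python
def max_backfeed_amps(busbar_rating_a, main_breaker_a, rule_factor=1.2):
    # "120% rule": total solar backfeed breaker amperage may not exceed
    # (busbar rating * 1.2) minus the main breaker rating.
    return busbar_rating_a * rule_factor - main_breaker_a

allowed = max_backfeed_amps(125, 100)  # 125 * 1.2 - 100 = 50 amps
combined = 30 + 20                     # the two inverter branch breakers
print(allowed, combined <= allowed)    # 50.0 True
```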

Comment:  The highest total power that I have seen from my system has been about 8000 watts - 3900 watts from the SB3.8 and just over 4100 watts from the SB 5000 - for a maximum of about 36 amps at 220 volts (abnormally low line voltage!) or about 33 amps total with a more typical 240 volt feed-in - well under the "50 amp" maximum.

For the new panels I installed eighteen 295 watt Solarworld units - a slight upgrade over the older 285 watt Suniva modules already in place. In my calculations I determined that even with the new panels having approximately 3.5% more rated output (e.g. a peak of 5310 watts versus 5130 watts, assuming ideal temperature and illumination - the latter being impossible with the roof angles) that the new inverter would "clip" (e.g. it would hit its maximum output power while the panels were capable of even more power) only a few 10s of days per year - and this would occur for only an hour or so at most on each occasion.  Since the ostensibly "oversized" panel array would be producing commensurately more power at times other than peak as well, I was not concerned about this occasional "clipping".
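As a quick sanity check of the figures above, the rating difference and the DC-to-AC ("oversizing") ratios work out as follows, using only the nameplate numbers from the text:

```python
new_array_w = 18 * 295  # eighteen 295 watt Solarworld panels = 5310 W
old_array_w = 18 * 285  # eighteen 285 watt Suniva panels     = 5130 W

rating_increase = new_array_w / old_array_w - 1.0
dc_ac_new = new_array_w / 3800.0  # vs. the SB3.8's nominal AC rating
dc_ac_old = old_array_w / 5000.0  # vs. the SB5000TL's nominal AC rating

print(f"{rating_increase:.1%}")                  # 3.5%
print(round(dc_ac_new, 2), round(dc_ac_old, 2))  # 1.4 1.03
```

The new array is "oversized" by about 40% relative to its inverter, but because the panels are split across two roof faces the array can never deliver its full nameplate rating at any one moment, which is why clipping remains rare.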

What was expected:

The two sets of panels, old and new, are located on the same roof, the old array being higher, nearer the ridge line, and the new just below it.  In my situation I get a bit of shading in the morning on the east side and a slight amount in the very late afternoon/evening in mid summer on the west side, and the geometry of the trees that cause this makes the shading of the new and old systems almost identical.

With this in mind, I would have expected the two systems to behave nearly identically.

But they don't!

Differences in produced power:

Having the ability to obtain graphs of each system over the course of a day I was surprised when the production of the two, while similar, showed some interesting differences as the chart below shows. 


Figure 2:
The two systems, with nearly identical PV arrays.  The production of the older SB5000 inverter with the eighteen 285 watt panels is represented by the blue line while the newer SB3.8 inverter with eighteen 295 watt panels is represented by the red line:  Each system has nine east-facing panels and nine west-facing panels.  The dips in the graph are due to loss of solar irradiance due to clouds.  Because the data for this graph is collected every 15 minutes, some of the fine detail is lost so the "dip" in production at about 1:45PM was probably deeper than shown.
The total production of the SB3.8 system (red line) for the day was 27.3kWh while that of the SB5000TL system (blue line) was 25.4kWh - a difference of about 7% overall.
Click on the image for a larger version.
In this graph the blue line is the older SB5000TL inverter and the red line is the newer SB3.8 inverter.  Ideally, one would expect the newer inverter, with its 295 watt panels, to be just a few percent higher than the older inverter with its 285 watt panels, but the difference is closer to 10% during the peak hours, when there is no shading at all.

What might be the cause of this difference?
Figure 3:
 The two parallel east-facing arrays, the older one being closer to
the (north-south) peak of the roof.
Click on the image for a larger version.

Several possible explanations come to mind:
  1. The new panels are producing significantly more than their official ratings.  A few percent would seem likely, but 10%?
  2. The older panels have degraded more than expected in the year that they have been in service.
  3. The two manufacturers rate their panels differently.
  4. There may be thermal differences.  The "new" panels are lower on the roof and it is possible that the air being pulled in from the bottom by convection is cooler when it passes by the new panels, having warmed by the time it gets to the "old" panels.  If we take at face value that 3.5% of the 10% difference is due to the rating - leaving 6.5% unaccounted for - this would require only about a 16C (29F) average panel temperature difference, but the temperature differences do not appear to be that large!
  5. The new panels don't heat up in the sun as much as the old.  The new panels are white in the interstitial gaps between individual cells and around the edges while the old panels are completely black, possibly reducing the amount of heating.  Again, there doesn't seem to be a 16C (29F) difference.
  6. The new inverter is better at optimizing the power from the panels than the old one.
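For possibilities #4 and #5, the implied temperature difference can be estimated from a panel's power temperature coefficient.  The -0.4%/C figure below is a typical crystalline-silicon value and is an assumption on my part, not a number from either panel's datasheet:

```python
def delta_t_for_power_diff(power_diff_pct, temp_coeff_pct_per_c=0.4):
    # Panel temperature rise needed to explain a given power loss,
    # assuming a typical crystalline-silicon power coefficient of
    # about -0.4 %/C (an assumption -- check the panel datasheets).
    return power_diff_pct / temp_coeff_pct_per_c

# 10% observed difference minus the 3.5% rating difference leaves 6.5%:
print(delta_t_for_power_diff(6.5))  # ~16 degrees C
```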
It's a bit difficult to make absolute measurements, but in the case of #2, the possibility of the "old" panels degrading, I think that I can rule that out.  In comparing the peak production days for 2016 and 2017, both of which occurred in early May (a result of the combination of reasonably long days and cool temperatures) the best peak was about the same - approximately 28.25kWh on the "old" system even after I'd installed the "new" panels on the east side.

I suspect that it is a combination of several of the above factors, probably excluding #2, but I have no real way of knowing the contribution of each.  What is surprising to me is that I have yet to see any obvious clipping on the new system in the production graphs even though I have "caught" it pegged at about 3920 watts on several occasions around local noon, so it seems that my estimate of "several dozen hours" per year where this might happen is about right.

I'll continue to monitor the absolute and relative performance of the two sets of panels to see how they track over time.

[End]


Tuesday, June 13, 2017

Adding a useful signal strength indication to an old, inexpensive handie-talkie for transmitter hunting

A field strength meter is a very handy tool for locating a transmitter, but a sensitive field strength meter by itself has some limitations:  It will respond to practically any RF signal that enters its input.  This property has the effect of limiting the effective sensitivity of the field strength meter, as any nearby RF source (or even ones far away, if the meter is sensitive enough...) will effectively mask the desired signal if it is weaker than these "background" signals.
Figure 1:
The modified Icom IC-2A/T HT with a broadband
field strength meter paired with the AD8307-based field
strength meter mentioned and linked in the article, below.
Click on the image for a larger version.

This property can be mitigated somewhat by preceding the input of the meter with a simple tuned RF stage and, in most cases, this is adequate for finding (very) nearby transmitters.  A simple tuned circuit does have its limitations:
  • It is only broadly selective.  A simple, single-tuned filter will have a response encompassing several percent (at best) of the operating frequency.  This means that a 2 meter filter will respond to nearly any signal near or within the 2 meter band.
  • A very narrow filter can be tricky to tune.  This isn't usually too much of a problem as one can peak on the desired signal (if it is close enough to register) or use your own transmitter (on the same or nearby frequency) to provide a source of signal on which the filter may be tuned.
  • The filter does not usually enhance the absolute (weak signal) sensitivity unless an amplifier is used.
An obvious approach to solving this problem is to use a receiver - but while many FM receivers have "S-meters", very few of those meters are truly useful over a wide dynamic range, most firmly "pegging" even on relatively modest signals and becoming nearly unusable if the signal is any stronger than "medium weak".  While an adjustable attenuator (such as a step attenuator or offset attenuator) may be used, the range of the radio's S-meter itself may be so limited that it is difficult to juggle observing the meter and adjusting the signal level to keep an "on-scale" reading.

Another possibility is to modify an existing receiver so that an external signal level meter with much greater range may be connected.

Picking a receiver:

When I decided to take this approach I began looking for a 2 meter (the primary band of interest) receiver with these properties:
  • It had to be cheap.  No need to explain this one!
  • It had to be synthesized.  It's very helpful to be able to change frequencies.
  • Having a 10.7 MHz IF was preferable.  The reasons for this will become apparent.
  • It had to have enough room inside it to allow the addition of some extra circuitry to allow "picking off" the IF signal.  After all, that's the entire point of this exercise.
  • It had to be easy to use.  Because one may not use this receiver very often, it's best not to pick something overly complicated that would require a manual to remind one how to do even the simplest of tasks.
  • The radio would still be a radio.  Another goal of the modification was that the radio had to work exactly as it was originally designed after you were done - that is, you could still use it as a transceiver!
Based on a combination of past familiarity with various 2 meter HTs and looking at prices on Ebay, at least three possibilities sprang to mind:
  • The Henry Tempo S-1.  This is a very basic 2 meter-only radio and was the very first synthesized HT available in the U.S.  One disadvantage is that, by default, it uses a threaded antenna connection rather than a more-standard BNC connector and would thus require the user to install one to allow it to be used with other types of antennas.  Another disadvantage is that it has a built-in non-removable battery.  Its power supply voltage is limited to under 11 volts.  (The later Tempo S-15 has fewer of these disadvantages and may be better, but I am not too familiar with it.)
  • The Kenwood TH-21.  This, too, is a very basic 2 meter-only radio.  It uses a strange RCA (e.g. phono) like threaded connector, but this mates with easily-available RCA-BNC adapters.  Its disadvantage is that it is small enough that the added circuitry may not fit inside.  It, too, has a distinct limitation on its power supply voltage range and requires about 10 volts.
  • The Icom IC-2A/T.  This basic radio was, at one time, one of the most popular 2 meter HTs which means that there are still plenty of them around.  It can operate directly on 12 volts, has a standard BNC antenna connector, and has plenty of room inside the case for the addition of a small circuit.  (The "T" suffix indicates that it has a DTMF numeric keypad.  The "non-T" version such as the IC-2A is a bit less common, but would work just fine for this application.)
Each of these radios is a thumbwheel-switch tuned, synthesized, plain-vanilla radio. I chose the Icom IC-2AT (it is also the most common), obtained one on Ebay for about $40 (including accessories), and another $24 bought a clone of the IC-8, an 8-cell AA battery holder (from Batteries America) that I normally populate with 2.5 amp-hour NiMH AA cells.  With its squelched receive current of around 20 milliamps this radio will run for days and days, so I will often use it as a "listen around the house" radio!

"Why not use one of those cheap chinese radios?"

Upon reading this you may be thinking "why spend $$$ on an ancient radio when you can buy a cheap Chinese radio that has lots of features for $30-ish?"

The reason is that these radios have neither a user-available "S" meter with good dynamic range nor an accessible IF (Intermediate Frequency) stage.  Because these radios are, in effect, direct conversion with DSP magic occurring on-chip, there is absolutely nowhere that one could connect an external meter - that signal simply does not exist!

While many of these "single-chip" radios do have some built-in S-meter circuitry, the manufacturers of these radios have, for whatever reason, not made it available to the user - at least not in a format that would be particularly useful for transmitter hunting.
Modifying the IC-2A/T (and circuit descriptions):

This radio is the largest of those mentioned above and has a reasonable amount of extra room inside its case for the addition of the few small circuits needed to complete the modification.  When done, this modification does not, in any way, affect otherwise normal operation of the radio:  It can still be used as it was intended!

An added IF buffer amplifier:

This radio uses the Motorola MC3357 (or an equivalent such as the MP5071) as the IF/demodulator.  This chip takes the 10.7 MHz IF from the front-end mixer and 1st IF amplifier stages and converts it to a lower IF (455 kHz) for further filtering and limiting, after which it is demodulated using a quadrature detector.  Unfortunately, the MC3357 lacks an RSSI (Receive Signal Strength Indicator) circuit - which partly explains why this radio doesn't have an S-meter.  Since we were planning to feed a sample of the IF from this receiver into our field strength meter anyway, this isn't too much of a problem.

Figure 2:
The source-follower amplifier tacked atop the IF amplifier chip.
Click on the image for a larger version.
We actually have a choice of two different IFs:  10.7 MHz and 455 kHz.  At first glance, 455 kHz might seem to be the better choice as it has already been amplified and is at a lower frequency - but there's a problem:  It compresses easily.  Monitoring the 455 kHz line, one can easily "see" signals in the microvolt range, but by the time a signal reaches the -60 dBm range or so, this signal path is already starting to go into compression.  This is a serious problem as -60 dBm is about the strength that one gets from a 100 watt 2 meter transmitter with a clear line-of-sight path at about 20 miles (about 30km), using unity-gain antennas on each end.  What this means is that if we were to use this signal tap, we might still be a fair distance away from the transmitter when the signal peaked.

The other choice is to tap the signal at the 10.7 MHz point, before it goes into the MC3357.  This signal, not having been amplified as much as the 455 kHz signal, does not begin to saturate until the input reaches about -40 dBm or so, reaching full saturation by about -35 dBm.  Given our example, above, -35 to -40dBm is roughly equivalent to a line-of-sight 100 watt 2 meter transmitter at 1-3 miles (approx. 1.6-5km) - which means that we'll get much closer before the signal path saturates - but we can easily deal with that as we'll discuss shortly.
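As a rough check of the signal-level figures above, the standard free-space path loss formula gives numbers in the same ballpark (free space only - real paths typically add a few dB of loss):

```python
import math

def fspl_db(distance_km, freq_mhz):
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.45
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

def rx_dbm(tx_watts, distance_km, freq_mhz, tx_gain_dbi=0.0, rx_gain_dbi=0.0):
    tx_dbm = 10 * math.log10(tx_watts * 1000.0)  # 100 W = +50 dBm
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# 100 watts on 2 meters over ~20 miles (32 km), unity-gain antennas:
print(round(rx_dbm(100, 32, 146.52), 1))  # about -56 dBm in free space
```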

One point of concern was that, at this point, the signal has had less filtering than at 455 kHz, the latter passing through a "sharper" bandpass filter.  While the filtering at 10.7 MHz is a bit broader, the 4 poles of crystal filtering do attenuate a signal 20 kHz away by at least 30 dB - so unless there's another very strong signal on an adjacent channel, it's not likely to be a problem.  As it turns out, the slightly "broader" response of the 10.7 MHz crystal filters is conducive to "offset tuning" - that is, deliberately tuning the radio off-frequency to reduce the signal level reading when you are near the transmitter being sought.


To be able to tap this signal without otherwise affecting the performance of the receiver requires a simple buffer amplifier, and a JFET source-follower does the job nicely (see figure 6, below, for the diagram).  Consisting of only six components (two resistors, three capacitors and an MPF102 JFET - practically any N-channel JFET will do) this circuit is simply tack-soldered directly onto the MC3357 as shown in figures 2 and 3.  It very effectively isolates the (more or less) 50 ohm load of the field strength meter from the high-impedance 10.7 MHz input of the MC3357, and does so while drawing only about 700 microamps - just 3-4% of the radio's total current when squelched.

Figure 3:
A wider view of the modifications to the radio.
Click on the image for a larger version.
As can be seen from the pictures (figure 2 and 3) all of the required connections were made directly to the pins of the IC itself, with the 330 pF input capacitor connecting directly to pin 16.  The supply voltage is pulled from pin 4, and pins 12 and/or 15 are used for the ground connection. 

A word of warning:  Care should be taken when soldering directly to the pins of this (or any) IC to avoid damage.  It is a good idea to scrape the pin clean of oxide and use a hot soldering iron so that the connection can be made very quickly.  Excess heat and/or force on the pin can destroy the IC!  It's not that this particular IC is fragile, but this is care that should be taken.

Getting the IF signal outside the radio:

The next challenge was getting our sampled 10.7 MHz IF energy out of the radio's case.  While it may be possible to install another connector on the radio somewhere, it's easiest to use an existing connector - such as the microphone jack.

One of the goals of these modifications was to retain complete function of the radio as if it were stock, so I wanted to be sure that the microphone jack would still work as designed.  This meant multiplexing both the microphone audio (and keying) and the IF onto the tip of the microphone connector, as I wasn't really planning to use the signal meter and a remote microphone at the same time.  Because of the very large difference in frequencies (audio versus 10.7 MHz) it is very easy to separate the two using capacitors and an inductor:  The 10.7 MHz IF signal is passed directly to the connector through a series capacitor while it is blocked from the radio's internal microphone/PTT line with a small choke:  Anything from 4.7uH to 100uH will work fine.
Figure 4:
The modifications at the microphone jack.
Click on the image for a larger version.

The buffered IF signal is conducted to the microphone jack using some small coaxial cable:  RG-174 type will work, but I found some slightly smaller coax in a junked VCR.  To make the connections, the two screws on the side of the HT's frame were removed, allowing it to "hinge" open, giving easy access to the microphone connector.  The existing microphone wire connected to the "tip" connection was removed and the choke was placed in series with it, with the combination insulated with some heat-shrinkable tubing.

The coax from the buffer amp was then connected directly to the "tip" of the microphone connector.  One possible coax routing is shown in Figure 4 but note that this routing prevents the two halves of the chassis from being fully opened in the future unless it is disconnected from one end.  If this bothers you, a longer cable can be routed so that it follows along the hinge and then over to the buffer circuit.  Note:  It is important to use shielded cable for this connection as the cable is likely to be routed past the components "earlier" in the IF strip and instability could result if there is coupling.

Interfacing with the Field Strength meter:

Using RG-174 type coaxial cable, an adapter/interface cable was constructed with a 2.5mm connector on one end and a BNC on the other.  One important point is that a small series capacitor (0.001uF) is required in this line somewhere as a DC block on the microphone connector:  The IC-2A/T (like most HTs) detects a "key down" condition on the microphone by detecting a current flow on the microphone line and this series capacitor prevents current from flowing through the 50 ohm input termination on the field strength meter and "keying" the radio.

Dealing with L.O. leakage:

As soon as it was constructed I observed that even with no signal, the field strength meter showed a weak signal (about -60 to -65 dBm) present whenever the receiver was turned on, effectively reducing sensitivity by 20-25 dB.  As I suspected when I first noticed it, this signal was coming from two places:
  • The VHF local oscillator.  On the IC-2A/T, this oscillator operates 10.7 MHz lower than the receive frequency.  In other words, tuned to 146.520 MHz, the local oscillator is running at 135.82 MHz.
  • The 2nd IF local oscillator.  On the IC-2A/T this oscillator operates at 10.245 MHz - 455 kHz below the 10.7 MHz IF as part of the conversion to the second IF.
The magnitude of each of these signals was about the same, roughly -65 dBm or so.  The VHF local oscillator would be very easy to get rid of -  A very simple lowpass filter (consisting of a single capacitor and inductor) would adequately suppress it - but the 10.245 MHz signal poses a problem as it is too close to 10.7 MHz to be easily attenuated enough by a very simple L/C filter without affecting it.

Figure 5:
The inline 10.7 MHz bandpass filter using a ceramic
filter.  The diagram for this may be seen in the upper-right
corner of Figure 6, below.
Click on the image for a larger version.
Fortunately, with the IF being 10.7 MHz, we have another (cheap!) option:  A 10.7 MHz ceramic IF filter.  These filters are ubiquitous, being used in nearly every FM broadcast receiver made since the 80s, so if you have a junked FM broadcast receiver kicking around, you'll likely have one or more of these in them.  Even if you don't have junk with a ceramic filter in it, they are relatively cheap ($1-$2) and readily available from many mail-order outlets.  This filter is shown in the upper-right corner of the diagram in Figure 6, below.

The precise type of filter is not important as they will typically have a bandpass that is between 150 kHz and 300 kHz wide (depending on the application) at their -6 dB points and will easily attenuate the 10.245 MHz local oscillator signal by at least 30 dB.  With this bandwidth it is possible to use a 10.7 MHz filter (which, themselves, vary in exact center frequency) for some of the "close - but not exact" IFs often found near 10.7 MHz such as 10.695 or 10.75 MHz.  The only "gotcha" with these ceramic filters is that their input/output impedances are typically in the 300 ohm area and require a (very simple) matching network (an inductor and capacitor) on the input and output to interface them with a 50 ohm system.  The values used for matching are not critical and the inductor, ideally around 1.8uH, could be anything from 1.5 to 2.2 uH without much impact on performance other than a very slight change in insertion loss.
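The matching values quoted above can be derived from the standard L-network equations - a sketch that assumes a purely resistive 330 ohm filter impedance:

```python
import math

def l_match(r_low, r_high, freq_hz):
    # Classic L-network: series inductor on the low-impedance side,
    # shunt capacitor across the high-impedance side.
    q = math.sqrt(r_high / r_low - 1.0)
    w = 2.0 * math.pi * freq_hz
    series_l = (q * r_low) / w   # henries (XL = q * r_low)
    shunt_c = q / (w * r_high)   # farads  (XC = r_high / q)
    return series_l, shunt_c

L, C = l_match(50.0, 330.0, 10.7e6)
print(round(L * 1e6, 2), round(C * 1e12))  # ~1.76 uH and ~107 pF
```

The computed inductance lands right at the "ideally around 1.8uH" figure mentioned above, confirming the quoted component values.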

While this filter could have been crammed into the radio, I was concerned that the L.O. leakage might find its way into the connector somehow, bypassing the filter.  Instead, this circuit was constructed "dead bug" on a small scrap of circuit board material with sides, "potted" in thermoset ("hot melt") glue and covered with electrical tape, heat shrink tubing or "plastic dip" compound, with the entire circuit installed in the middle of the coax line (making a "lump.")  Alternatively, this filter could have been installed within the field strength meter itself, either on its own connector or sharing the main connector and being switchable in/out of the circuit.

Figure 6:
The diagram, drawn in the 1980s Icom style, showing the modified circuity and details of the added source-follower JFET amplifier (in the dashed-line box) along with the 10.7 MHz bandpass filter (upper-right) that is built into the cable.
Click on the image for a larger version.
With this additional filtering the L.O. leakage is reduced to a level below the detection threshold of the field strength meter, allowing sub-microvolt signals to be detected by the meter/radio combination.

Operation and use:

When using this system, I simply clip the radio to my belt and adjust it so that I can listen to what is going on.

There's approximately 30 dB of processing gain between the antenna and the 10.7 MHz IF output - that is, a -100 dBm signal on the antenna on 2 meters will show up as a -70 dBm signal at 10.7 MHz.  What this means is that sub-microvolt signals are just detectable at the bottom end of the range of the field strength meter.  From a distance, a simple gain antenna such as a 3-element "Tape Measure Yagi" (see the article "Tape Measure Beam Optimized for Direction Finding" - link) will establish a bearing, the antenna's gain providing both an effective signal boost of about 7dB (compared to an isotropic antenna) and directivity.
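The dBm-to-microvolt figures implied above follow from the standard conversion for a 50 ohm system:

```python
import math

def dbm_to_microvolts(dbm, impedance=50.0):
    # RMS voltage (in microvolts) of a signal of `dbm` into `impedance` ohms
    watts = 10.0 ** (dbm / 10.0) / 1000.0
    return math.sqrt(watts * impedance) * 1e6

# -70 dBm at the meter minus ~30 dB of processing gain = -100 dBm at the antenna:
print(round(dbm_to_microvolts(-100), 2))  # ~2.24 uV
print(round(dbm_to_microvolts(-107), 2))  # ~1.0 uV
```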

While driving about looking for a signal I use a multi-antenna (so-called) "Doppler" type system with four antennas being electrically rotated to get the general bearings with the modified IC-2AT being the receiver in that system.  With the field strength meter connected I can hear its audio tone representing the signal strength without need to look at it.  As I near the signal source and the strength increases, I have both the directional indication and the rising pitch of the tone as dual confirmation that I am approaching it.

The major advantage of using the HT as a tunable "front end" for the field strength meter is that the meter gains greatly enhanced selectivity and sensitivity - but this is not without cost:  As noted before, this detection system will begin to saturate at about -40 dBm, fully saturating above -35 dBm - a "moderately strong" signal.  In "hidden-T" terms, it will "peg" when within a hundred feet or so of a 100 mW transmitter with a mediocre antenna.

When the signals become this strong, you can do one of several things:
  • Detune the receiver by 5, 10, 15 or even 20 kHz.  This will reduce the sensitivity by moving the signal slightly out of the passband of the 10.7 MHz IF filters.  This is usually a very simple and effective technique, although heavy modulation can cause the signal strength readings to vary.
  • Add attenuation to the front-end of the receiver.  The plastic case of the IC-2A/T is quite "leaky" in terms of RF ingress, but it is good enough for a step attenuator on the antenna lead to work nicely, extending the usable range to at least -10 dBm.  I use a switchable step attenuator for this and I have found that I can drive to the location (house, yard, park) where the transmitter is located and still have sufficient adjustment range.
  • When you are really close (e.g. 10s of yards/meters) to the transmitter being sought you can forgo the receiver altogether, connecting the antenna directly to the field strength meter!
If you want to be really fancy, you can build the 10.7 MHz bandpass filter and add switches to the field strength meter so that you can switch the 20 dB of attenuation in and out as well as routing the signal either to the receiver, or to the field strength meter using a resistive or hybrid splitter to make sure that the receiver gets some signal from the antenna even when the field strength meter is connected to the antenna.

What to use as the field-strength meter:

The field strength meter used is one based on the Analog Devices AD8307, which is useful from below 1 MHz to over 500 MHz, providing a nice, logarithmic output over a range from below -70dBm to above +10dBm.  It is, however, as broad as the proverbial "barn door", and between that and a sensitivity of "only" -70dBm it is nowhere near useful enough with weak signals on its own - especially if there are any other radio transmitters nearby, including radio and TV stations within a few 10s of miles/kilometers.  The integration of this broadband detector with the narrowband, tunable receiver IF and its gain makes for a complete system useful for signals ranging from weak to strong.

The description of an audible field-strength meter may be found on the web page of the Utah Amateur Radio club in another article that I wrote, linked here:  Wide Dynamic Range Field Strength Meter - link.  One of the key elements of this circuit is that it includes an audio oscillator with a pitch that increases in proportion with the dB indication on the meter, allowing "eyes-off" assessment of the signal strength - very useful while one is walking about or in a vehicle.
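The "pitch proportional to dB" behavior could be modeled along these lines - the mapping constants here are purely hypothetical and for illustration only; the actual meter uses an analog oscillator:

```python
def strength_to_pitch_hz(dbm, floor_dbm=-70.0, base_hz=300.0, hz_per_db=25.0):
    # Hypothetical linear dB-to-pitch mapping: the tone starts at base_hz
    # at the detection floor and rises hz_per_db for every dB above it.
    return base_hz + max(0.0, dbm - floor_dbm) * hz_per_db

print(strength_to_pitch_hz(-70))  # 300.0 Hz at the floor
print(strength_to_pitch_hz(-40))  # 1050.0 Hz, 30 dB stronger
```

Because pitch tracks the logarithmic (dB) reading rather than raw power, each doubling of signal power raises the tone by the same interval - easy to judge by ear while walking or driving.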

There are also other web pages that describe the construction of an AD8307-based field strength meter (look for the "W7ZOI" power meter as a basis for this type of circuit) - and you can even buy pre-assembled boards on EvilBay (search on "AD8307 field strength meter").  The downside of most of these is that they do not include an audible signal strength indication to allow "eyes off" use, but this circuit could be easily added, adapted from that in the link above.

Another circuit worth considering is the venerable NE/SA605 or 615, which is, itself, a stand-alone receiver.  Of interest in this application is its "RSSI" (Receive Signal Strength Indicator) circuit, which has good sensitivity, is perfectly suited for use at 10.7 MHz, and has a nice logarithmic response over a wide dynamic range - nearly as much as the AD8307.  Exactly how one would use just the RSSI pin of this chip is beyond the scope of this article, but information on doing this may be found on the web in articles such as:
  • NXP Application note AN1996 - link (see figure 13, page 19 for an example using the RSSI function only)

Additional comments:
  • At first, I considered using the earphone jack for interfacing to the 10.7 MHz IF, but quickly realized that this would complicate things if I wanted to connect something to the jack (such as pair of headphones or a Doppler unit!) while DFing.  I decided that I was unlikely to be needing to use an external microphone while I was looking for a transmitter!
  • I haven't tried it, but these modifications should be possible with the 222 MHz and 440 MHz versions (IC3 and IC4) of this radio - not to mention other radios of this type.
  • Although not extremely stable, you can listen to SSB and CW transmissions with the modified IC-2A/T by connecting a general-coverage/HF receiver to the 10.7 MHz IF output and tuning +/- 10.7 MHz.  Signals may be slightly "warbly" - but they should be easily copyable!
Finally, if you aren't able to build such a system and/or don't mind spending the money and you are interested in what is possibly the best receiver/signal strength meter combination device available, look at the VK3YNG Foxhunt Sniffer - link.  This integrates a 2 meter receiver (also capable of tuning the 121.5 MHz "ELT" frequency range) and a signal strength indicator capable of registering from less than -120dBm to well over +10dBm with an audible tone.

Comment:  This article is an edited/updated version of one that I posted on the Utah Amateur Radio Club site (link) a while ago.


[End]
