Tuesday, October 17, 2017

A 10 MHz OCXO (Oven-controlled Crystal Oscillator)

Figure 1:
The 10 MHz OCXO (lower right) in use with my homebrew
24 GHz transverter.  At 24 GHz, the oven provides excellent frequency
stability, suitable for SSB or even digital modes, while holding the
frequency uncertainty to a few hundred Hz at most.
Click on the image for a larger version.
Why a frequency reference?

When operating on the microwave amateur radio bands, narrowband modes such as SSB or CW are often used to maximize the link margin - that is, to be able to communicate when signals are weak - and using narrowband modes at microwave frequencies demands pretty good frequency stability and accuracy:

  • Stability is important as a drift of even a few hundred Hz at the operating frequency (in the GHz range!) can affect intelligibility of voice - or, if CW is being used for weak-signal work, such drifting can move the received signal outside the receiver's passband filter!  Having to "chase" the frequency around is not only distracting, but it complicates being able to communicate in the first place.
  • Accuracy is also important:  Both parties must be confident that their operating frequencies are reasonably close.  If a contact is arranged beforehand it is vital that both parties be able to find each other simply by knowing the intended frequency of communication, and as long as the two parties are within several hundred Hz of each other it is likely that they will be able to find each other if the path "works" in the first place.  If the error were on the order of several kHz, "hunting" would be required to find the signal - and if those signals are weak, they may be missed entirely.
Because achieving such stability and accuracy requires some effort, it is more convenient if our gear is constructed such that it can use a common, external frequency reference and lock to it:  In that way, we need only have one "master" reference rather than several individual references.
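To put numbers on this, a reference's error is usually quoted in parts-per-million (ppm), and the error it produces scales directly with the operating frequency.  A quick illustrative calculation (a sketch, not taken from any particular oscillator's specification):

```python
def freq_error_hz(ref_ppm: float, operating_hz: float) -> float:
    """Frequency error at the operating frequency for a given
    reference error expressed in parts-per-million."""
    return ref_ppm * 1e-6 * operating_hz

# A 0.01 ppm (1 part in 10^8) reference error at 24 GHz:
print(freq_error_hz(0.01, 24e9))   # 240 Hz - fine for SSB/CW
# A 1 ppm reference error at 24 GHz:
print(freq_error_hz(1.0, 24e9))    # 24 kHz - "hunting" required
```

This is why a "few hundred Hz at most" at 24 GHz corresponds to holding the 10 MHz reference to roughly a part in 10^8.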

Figure 2:
The 10 MHz Isotemp 134-10 OCXO - one of many similar units that
often show up on EvilBay.  A 200uF, 16 volt capacitor is soldered
directly to the supply terminals of the OCXO to provide low-impedance
filtering of any noise that might appear on it - any value from a couple
hundred uF up to several thousand uF would be just fine.  The green device is a 10-turn
trimmer potentiometer soldered directly to the OCXO's pins.  This device is
used to adjust the tuning voltage to precisely set the frequency and locating it
at the OCXO practically eliminates the possibility of external noise pick-up
on the tuning lines.  The OCXO is mounted in the case using rubber/metal
shock mounts with "blobs" of RTV (silicone) on the sides that prevent
it from hitting the inside of the box should the unit be accidentally dropped.
The corners/edges of the OCXO could be mounted in some stiff foam,
instead - but it should not be thermally insulated by this foam unless you have
demonstrated to yourself that doing so will not reduce the oven's stability.
Click on the image for a larger version.
Having one common frequency reference can also be convenient if one is operating portable using battery power since it can mean that one doesn't need to keep all of those individual pieces of gear "warmed up" all of the time to maintain stability.  If a particular piece of gear can accept an external 10 MHz input, this would allow one to turn on that gear (and drain battery power) only when it is needed.

At this point I might mention that Rubidium frequency references (such as the one described here) are also readily available on the surplus market, providing at least an order of magnitude greater accuracy and stability and warming up in less time than a crystal reference - so why not always use a Rubidium reference instead of a crystal-based one?  The crystal-based unit is cheaper, easier to package and consumes significantly less power than a Rubidium reference, and the stability/accuracy of a good-quality crystal-based reference is more than "good enough" at least through 24 GHz!  When I go out in the field to do portable microwave work, I'll often power up the OCXO after putting it in the car, knowing that by the time I get to my destination and set up, it will be warm and on-frequency.  (To be sure, I bring the Rubidium as a "backup" reference!)

About this frequency reference:

The oscillator:

The goal for this project was to have a "reasonably stable and accurate" reference:  Based on an Isotemp OCXO 134-10, this unit seems to be able to hold the 24 GHz local oscillator to within 500 Hz or better once it has had 15-20 minutes or so to warm up - and it seems to be fairly stable across a range of ambient temperatures from "hot" to "below freezing."  The Isotemp unit - and others like it - are readily available on both the new and surplus markets via EvilBay and similar.

The oven module itself is rated to operate from 13 volts, +/- 2 volts, implying a minimum of 11.0 volts.  Even though testing indicated that it seemed to be "happy" with a supply voltage as low as 9.8 volts or so, it was decided to adhere to the published specifications.  In looking around I noticed that most readily-available low-dropout regulators (including those that I had on hand) were not specified to handle the maximum "cold" current of this oven - about 800 mA or so - so I had to "roll my own" 11 volt "zero-dropout" regulator.  More on alternative regulators, below.
Figure 3:
The inside of the enclosure containing the OCXO, regulator and driver.
On the left is the shock-mounted OCXO while the circuit on the perfboard
is the "zero drop-out" regulator and the 10 MHz distribution amplifier.
The P-channel FET pass transistor can be seen along the top edge of
the die-cast enclosure, bolted to it to dissipate any heat while along
the right edge, inside the enclosure is a piece of glass-epoxy circuit
board material to provide a solid, solderable ground plane for the
distribution outputs and the DC input filtering.

A "zero-dropout" regulator:

Why regulate?  I noted in testing that slight variations of supply voltage (a few hundred millivolts) would cause measurable disturbances in the oscillator frequency due to the changes in power applied to the heater, taking several minutes to again reach equilibrium.  Since battery operation was anticipated, the supply voltage would be expected to change frequently - between periods of transmit and receive - as well as with normal battery discharge.

Referring to the schematic, U101 - a standard 5 volt regulator (the lower-power 78L05 is a good choice) - provides a stable voltage reference for U103, a 741 op amp, which is used as an error amplifier.  A 7805 was chosen as it is readily available, but a Zener diode and resistor could have been used instead:  If a Zener is used, a 5.6-6.2 volt unit is recommended with 2-5 milliamps of bias, as this voltage range offers good temperature stability.

If the output voltage is too low, the voltage on pin 3 drops, along with pin 6, the op amp's output, which turns on Q103, a P-Channel power MOSFET, raising the output voltage; once the voltage on the wiper of R119 reaches 5 volts - that of the reference - the circuit comes to equilibrium.  A P-Channel FET (a less-common device than an N-channel) was used because it takes 3-5 volts of gate-source voltage to turn on a power MOSFET, and with an N-Channel FET on the high side it would have been necessary to have at least a 16 volt supply to bias the gate.  Furthermore, with a P-Channel power MOSFET the dropout voltage of the regulator is limited essentially by the channel resistance of that FET.  In theory a PNP transistor (possibly in a complementary-pair arrangement) could be used instead if one could tolerate closer to a volt of dropout, but the FET was chosen to minimize the dropout voltage.

In testing, once the oven was warm (a condition in which the OCXO was drawing approximately 250 mA at normal "room temperature") the dropout of the regulator was approximately 50 millivolts - a voltage drop that is likely to be comparable to that of the resistance of the wires used to power the unit.  This rather simple regulator seems to work quite well, holding the output voltage steady to within a few millivolts over an input voltage range of 11.1 to 17 volts, with good transient response.
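As a sanity check on those numbers, 50 millivolts of dropout at 250 mA implies an effective channel resistance of about 0.2 ohms, and only modest dissipation in the pass FET when running from a typical supply.  A small illustrative calculation (the 13.8 volt supply figure is an assumption, not a measurement from this article):

```python
def fet_on_resistance(dropout_v, current_a):
    """Effective channel resistance implied by a measured dropout."""
    return dropout_v / current_a

def pass_dissipation(v_in, v_out, current_a):
    """Power dissipated in the pass transistor."""
    return (v_in - v_out) * current_a

print(fet_on_resistance(0.050, 0.250))      # 0.2 ohms
# Assumed 13.8 V vehicle-battery supply, 11 V output, warm oven:
print(pass_dissipation(13.8, 11.0, 0.250))  # 0.7 W in the FET
```

With only 0.7 watts or so to get rid of, bolting the FET to the die-cast enclosure (as in Figure 3) is more than adequate heat-sinking.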
Figure 4:
The end panel of the OCXO module.  The power feedthrough/capacitor
is on the left, obscured by the red/white power cable with the yellow-ish
"ready" light to the right of it.  The three BNC connectors are the 10 MHz
outputs, allowing multiple devices to be connected while in use and/or while
its calibration is being checked.
Click on the image for a larger version.

"Faster warmup" feature:

This OCXO has a "status" output that, when "cold", outputs about 0 volts and in this state, Q101 is turned off, allowing R112 and R113/D102 to pull its collector high - turning on Q102 - which pulls the gate of Q103 low through R118, turning it fully "on."  In this state the voltage applied to the oven is nearly that of the battery supply and this higher voltage increases the power applied to the oven, allowing it to heat more quickly.  Once the oven's "status" line goes high, Q101 is turned on, illuminating the LED and turning off Q102, allowing the regulator to operate normally.

Note:  When the unit is warming up, the OCXO's voltage is unregulated which means that the supply should be kept below 15.0 volts to stay within the "safe zone" of the ratings of the oscillator itself.

Does the "boosted" voltage actually help the oven warm up faster?  Probably only a little bit, but it took only 4 additional components to add this feature!

Status indicator:

It should be noted that this status line doesn't indicate that the oven has fully warmed up - only that it is no longer in its initial, full-power heating state:  At "room temperature" it takes at least another 5 minutes before the frequency will be stable enough for use, and another 5 minutes or so after that until it's "pretty close" to the intended frequency and can be used at microwave frequencies without having to chase people around.  Why have the indicator light if it doesn't indicate that the unit is "ready"?

The answer:  If the light isn't on, you can be sure that the frequency output won't be valid for one reason or another.

The distribution amplifier:

Because the OCXO itself is somewhat load-sensitive, U102 - an LM7171 - is used as a distribution amplifier, both to isolate the oven from its loads and to provide fan-out, allowing multiple outputs to be driven simultaneously.  The LM7171, a high-output, high-speed op amp, is configured for a gain of 2, providing about 2 volts peak-to-peak output with the drive provided by the OCXO.
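That gain of 2 comes from the standard non-inverting op-amp configuration, where the gain is 1 + Rf/Rg.  A minimal sketch (the resistor values below are placeholders for illustration - the actual values from the schematic are not reproduced in this text):

```python
def noninverting_gain(r_feedback_ohms, r_ground_ohms):
    """Voltage gain of a non-inverting op-amp stage: 1 + Rf/Rg."""
    return 1 + r_feedback_ohms / r_ground_ohms

# Equal feedback and ground resistors give a gain of 2:
print(noninverting_gain(1000, 1000))  # 2.0
```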

Mounting the oven:

Because this unit is intended to be used "in the field", the OCXO module was shock-mounted to prevent mechanical shock from affecting its reliability, frequency stability and accuracy.  This was done using some rubberized mounting pillars from scrapped satellite equipment, while "blobs" of silicone were placed on the wall of the die-cast enclosure to prevent the OCXO housing from directly impacting it should the unit be accidentally dropped.

Figure 5:
Schematic of the OCXO-based unit, including the zero-dropout regulator and 10 MHz distribution amplifier.
Click on the image for a larger version.
A few bits of stiff foam could also be used to provide some shock mounting in the corners of the OCXO but be aware that some oven-based oscillators have been known to become less accurate and stable if they are over-insulated, so don't go overboard.


Like any crystal oscillator, it is somewhat "position sensitive" in that a frequency shift of a hundred Hz or so (at 24 GHz) can be observed if the unit is placed on its side, upside-down, etc. due to the effect of gravity on the quartz crystal itself.  While this effect is very minor, it's worth noting when it's being set to frequency and in operation.

In other words, when you calibrate it, do so in the same physical orientation that it will be when it is in use.

DC input protection and filtering:

Finally, the input supply is RF-bypassed using a feedthrough capacitor to prevent the ingress or egress of extraneous RF along the power lead.   For power-supply short-circuit and reverse-polarity protection, R101, a 1.1 amp, self-resetting PTC fuse is used in conjunction with D101, a 3-amp diode.

Why not use a forward-biased diode for reverse-polarity protection?  If you recall, we went to the trouble of minimizing voltage dropout with our "special" voltage regulator, and we would wreck this effort by inserting something that caused a voltage drop - even the 0.3-ish volts of a Schottky diode would undermine this.

By using the "reverse-biased diode" and the self-resetting PTC fuse we get:

  • A means of current limiting should something go wrong:  If we accidentally short something out, the fuse resets itself when the fault is cleared - and there is no need to worry about not having a spare fuse when one is out in the hinterland trying to operate!
  • If the polarity is somehow connected backwards, the diode will conduct and the PTC fuse will "open" - no harm done, once the fault is rectified.
  • Of course, there is negligible voltage drop related to the fuse:  The fuse's resistance is a fraction of an Ohm under normal conditions.  In this manner we don't compromise the voltage "headroom" of a 12-volt lead-acid battery.

The best way to calibrate this device is to use a GPS-disciplined oscillator or a known-good rubidium frequency reference.  If you have access to one of these, connect the output of the OCXO to one channel of a dual-trace oscilloscope and the frequency reference to the other, triggering on one of the two signals - it really doesn't matter which one.

Note:  If you have an analog dual-trace oscilloscope with sufficient bandwidth you can use the "X/Y" mode to produce a Lissajous pattern (obligatory Wikipedia reference here) - but this doesn't always work well on modern, digital scopes when high frequencies are involved due to sample aliasing.

Adjusting the 'scope to see one of the waveforms, one should see a stationary wave (the one on which the 'scope is triggered) while the other will be "sliding" past the first.  Adjust the OCXO's frequency (after the OCXO has warmed up for at least 30 minutes - preferably more) while it is sitting in the same physical orientation in which it will be used as this can (slightly) affect frequency.

The OCXO's frequency is then adjusted to minimize the rate at which the two sine waves move with respect to each other:  It's sometimes easier to make this adjustment if the 'scope is adjusted so that the two waves are atop each other and about the same size.  With careful adjustment it should be possible to get two waveforms that take more than 10 seconds to "slide" past each other - maybe longer.  The Isotemp OCXO noted should, in theory, be able to hold to that "10 second" slide rate over a wide variety of temperature conditions.
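It's worth working out what that "10 second slide" actually means:  One full slide of one waveform past the other corresponds to one cycle of frequency difference, so a 10-second slide is a 0.1 Hz offset at 10 MHz - one part in 10^8, or about 240 Hz at 24 GHz.  A sketch of that arithmetic:

```python
def offset_from_slide(slide_seconds):
    """Frequency offset (Hz) implied by one full cycle of slip
    every 'slide_seconds' on the oscilloscope display."""
    return 1.0 / slide_seconds

def error_at(operating_hz, offset_hz, ref_hz=10e6):
    """Scale a reference offset up to the operating frequency."""
    return offset_hz * operating_hz / ref_hz

off = offset_from_slide(10.0)   # 0.1 Hz offset at the 10 MHz reference
print(off)                      # 0.1
print(error_at(24e9, off))      # 240.0 Hz at 24 GHz
```

A longer slide time means a proportionally smaller error - a 100-second slide would be 24 Hz at 24 GHz.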

If you don't happen to have access to a rubidium reference or a GPS-disciplined oscillator, you can do "reasonably" well by zero-beating the 10 MHz output with the signal from WWV or WWVH, but note that Doppler shifts can cause their apparent frequencies to shift by 1 Hz or more.  I'll leave the explanation of methods of successfully zero-beating an off-air signal to others on the GoogleWeb.

The best time to attempt this is when you are hearing only one of these two stations (assuming that you can ever hear them both) and when its signal is the most "solid" - that is, its fading in and out is at a minimum.  Often, the worst time to make this sort of measurement is when any part of the radio path between you and WWV (or WWVH) is within an hour or two of sunrise or sunset, as this is when the ionospheric layers are in a state of flux.  If you are hearing both WWV and WWVH, don't try this, as the two apparent frequencies and signal strengths will likely not be consistent and the results will probably be confusing.

If you don't happen to live in an area where you have a reasonable signal from WWV or WWVH then I'd suggest you ask around to find someone who has appropriate gear to help with this task.
Comment about alternative schemes for low-dropout regulation for the OCXO:

Since this web page was originally put together, a number of "low-dropout" adjustable regulator ICs have appeared on the market that may be suitable for this project - but there are a few caveats.

For example, there is the Linear Technology LT1086-Adj, which is rated for up to 1.5 amps of current.  While it has much lower dropout than a conventional adjustable regulator such as an LM317, it still has approximately 1 volt of dropout, which means that if you set the OCXO's supply voltage to 11.0 volts - the minimum recommended in the OCXO's specification - your battery voltage must be at least 12.0 volts:  While this represents a lead-acid battery that is mostly depleted, it is likely that a small, but healthy, lead-acid battery could drop to such a voltage under transmit load - particularly if the resistance of the power leads is taken into account.  This 3-terminal regulator is used in a manner very similar to the LM317 - except that you really must have some good quality, low-ESR capacitors (probably tantalum) very close to the regulator itself - see the data sheet.

Also made by Linear Technology is the LT1528, rated for up to 3 amps with a (nominal) 0.6 volts of dropout - more typically in the 0.3 to 0.5 volt area for the amount of current consumed by the OCXO, particularly once it has warmed up:  This extra margin would keep one in the "safe" region of the OCXO's operating voltage range down to around 11.5 volts of battery voltage, allowing both "deeper" battery discharge and more voltage drop in the connecting wires.  This part is somewhat more complicated to use than the LT1086, above, but it is, overall, simpler than the op-amp based regulator described earlier on this page.

If the "fast warmup" were to be implemented on either of the above regulators it would take a different form than the above - likely using several resistors and a transistor or two to "switch" the resistor-programmed voltage setting to something higher than the normal voltage.

There are a number of other, similar, low-dropout regulators that are made by different manufacturers, but very few have as low dropout as the FET/Op-amp circuit described on this page!

Additional comments:
  • It is recommended that one not use a switching regulator to power the OCXO unless it has been extremely well filtered and bypassed.
  • If you are interested in an example of this project being built with an etched PC board with surface-mount parts, visit VK4ABC's 10 MHz OCXO Web Page.

* * *

This is a revised version of one of my web pages, the original being found at http://www.ka7oei.com/10gig/10meg_oven_1.html


This page stolen from ka7oei.blogspot.com

Note:  This post is partially an attempt to test means of reducing the "scraping" of content of this blog by sites such as "rssing", who seem to "swipe" content and "load" search engines' results with unwary readers NOT ending up at my page.


Thursday, October 5, 2017

Fixing my failed "Kill-A-Watt" meter - and a bit about capacitive dropper power supplies.

A few days ago I had the need to measure the load of an appliance, so I dug out my "Kill-A-Watt" power meter.  The purpose of this device is to measure not only the load of the appliance in watts, but it will also measure the line voltage, frequency, and provide a running total of consumed energy over time in kilowatt-hours (kWh).  Usefully, this device will also measure things like power factor and volt-amps (VA) - both of which can be useful in determining how much actual load something may be putting on your generator.  (For more info on power factor and volt-amps, read the Wikipedia article linked here.)
Figure 1:
147 volts on the power mains?  I don't think so!
Click on the image for a larger version.

I was both surprised and annoyed when I plugged in the unit and it informed me that the line voltage was about 147 volts so I grabbed a voltmeter and found it to be a more reasonable 123 volts - a typical voltage in the U.S. for the circuits powering lower-power household devices:  Something was wrong!  As I looked at the Kill-A-Watt's display I noticed something else:  It seemed to be flickering slightly - something that I'd not noticed it doing before, but it was also a clue as to what might be wrong.

This was clearly due to a power supply problem within the unit - but since power supplies in these sorts of devices are often very simple I figured that it would be pretty easy to fix and upon opening it up, I immediately recognized it as a typical "capacitive dropper".

A "capacitive dropper" power supply:

One of the simplest and cheapest ways to get a low-current supply from mains voltage is to put a capacitor in series with it - and this is smaller, lighter and less expensive than using a power transformer.  If you aren't familiar with the use of capacitive droppers, it may seem strange - but it can be quite effective and safe if done properly.  While a simple series resistance may seem more intuitive, it has the problem that it can generate quite a bit of heat.

Capacitor dropper supplies have, by their very nature, a very poor "power factor" - that is, the waveform of the voltage is not in phase with the current through them.

What this means is that although the example below may be pulling about (225 volts * 0.01 amps =) 2.25 watts, because the voltage and current peaks don't occur at the same time, if you were to put a Kill-A-Watt on such a circuit, it would not read 2.25 watts - in fact, it may not read anything at all!

Why?  Remember that watts is "volts * amps" - but if the current and voltage are out of phase far enough, the voltage may be zero while the current is at maximum - and later in the sine wave's cycle, the voltage may be at maximum but the current may be zero - and in either case, the math tells us that that would be zero watts.

The Kill-A-Watt can also read "volt-amps" - which will correctly indicate the "volts * amps" - even when they don't happen at the same time.  If the "power factor" is 1.0, watts and volt-amps will be the same, but if it is something other than 1.0, the volt-amps will be higher than the watts.

Why do we care?  A generator or inverter can deliver only so many amps - and the "watts" rating that they have always assumes that the power factor is perfect.  If your generator load has a terrible power factor - say 0.5, that means that the "watts" reading is about half of the "volt-amps" reading - but since the amps is the same in either case it may appear that the generator cannot supply the power.  In other words, if your generator is trying to power, say, computers that have a terrible power factor, you may find that it will trip out at a lot lower wattage than you might expect!
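The effect of phase shift on real power can be shown numerically:  Real power is the average of instantaneous volts times amps over a cycle, which for sinusoidal voltage and current works out to Vrms * Irms * cos(phase).  A sketch (the 225 volt / 0.01 amp figures are the ones used in this section's dropper example):

```python
import math

def real_power(v_rms, i_rms, phase_deg):
    """Real power (watts) for sinusoidal V and I with a phase offset."""
    return v_rms * i_rms * math.cos(math.radians(phase_deg))

def volt_amps(v_rms, i_rms):
    """Apparent power (VA) - ignores the phase relationship."""
    return v_rms * i_rms

print(volt_amps(225, 0.01))         # 2.25 VA regardless of phase
print(real_power(225, 0.01, 0))     # 2.25 W - purely resistive load
print(real_power(225, 0.01, 90))    # ~0 W - purely reactive (capacitive)
```

At a 90 degree phase shift - the ideal capacitor case - the volt-amps are unchanged but the real power falls to essentially zero, which is exactly why the Kill-A-Watt's "watts" display may read nothing at all.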

Let's take a simple example:  Suppose that we need 10 milliamps at 5 volts for a hypothetical oven clock, ignoring the "converting to DC" part for the moment.  Doing the math we see that we need to drop from a nominal 230 volt mains (230 - 5 =) 225 volts - such as what might be found in the power supply for an electric oven.  With 225 volts to drop, Ohm's law tells us that we need (225 volts / 0.01 amps = ) 22500 ohms - so let's pick the closest standard value of 22000 ohms (a.k.a. 22k).

While we could use a 22k resistor for this, knowing the voltage drop (225 volts) and current (0.01 amps) we can see that we would be dissipating (225 volts * 0.01 amps = ) 2.25 watts.  While this doesn't sound like much power, containing this much heat in a very small box would cause it to get a bit warm - and it would heat up everything else as well, likely shortening the life of other components.

Alternatively, we can take into account the fact that our mains power will be a sine wave - 60 Hz in the U.S., 50 Hz in most other places - and use this to provide reactive current limiting - and by using the reactance of the capacitor, we can get the same voltage drop, but without any heat!  This lack of heat has to do with the fact that unlike a pure resistance, a pure reactance - like that of an ideal capacitor (or an ideal inductor) is theoretically loss-less.

An "inductive dropper" power supply is also possible - and in many instances it would be preferable - but it is far easier and cheaper to make small, low-loss capacitors than low-loss inductors, so inductive droppers are used only in very special circumstances.

Remembering that capacitors block DC, it makes sense that as the frequency increases, more current can flow through a given capacitance - and this is calculated using a simple formula:

Z = 1 / (2 * Pi * Frequency * capacitance)

Z = reactance in ohms
Frequency is in Hertz
capacitance is in Farads
For our purposes we can simply consider the "reactance" to be equivalent to resistance - the clue being that, like resistance, its value is expressed in Ohms.

We can see from this formula that with frequency and capacitance in the denominator (bottom) of the fraction that if we increase either one, the equivalent resistance goes down in proportion.  Because from the math above we already know that we need 22k of equivalent resistance, we can rearrange the formula, swapping the locations of capacitance and "Z" to solve for capacitance, as in:

capacitance = 1 / (2 * Pi * Frequency * Z)

So, for 60 Hz:

1 / ( 2 * 3.14 * 60 Hz * 22000 ohm) = 0.00000012 Farads = 0.12 uF (microFarads)

And for 50 Hz:

1 / ( 2 * 3.14 * 50 Hz * 22000 ohm) = 0.00000014 Farads = 0.14 uF

Recall that the lower the frequency, the higher the effective resistance - and the lower the current.  If we want this to work on both 50 and 60 Hz systems, we pick the next-higher standard capacitor value for the lower frequency (50 Hz), or 0.15 uF (a.k.a. 150nF), to make sure that we get at least the minimum current that we need (0.01 amps) in either case.
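The arithmetic above is easy to wrap in a couple of helper functions; a sketch using the article's own formula:

```python
import math

def reactance(capacitance_f, freq_hz):
    """Capacitive reactance in ohms: Z = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

def capacitance_for(z_ohms, freq_hz):
    """Capacitance needed for a target reactance at a given frequency."""
    return 1.0 / (2 * math.pi * freq_hz * z_ohms)

# The 22k "equivalent resistance" from the example:
print(round(capacitance_for(22000, 60) * 1e6, 3))  # 0.121 uF at 60 Hz
print(round(capacitance_for(22000, 50) * 1e6, 3))  # 0.145 uF at 50 Hz

# The chosen standard value, 0.15 uF, at each frequency:
print(round(reactance(0.15e-6, 50)))  # ~21221 ohms - slightly more current
print(round(reactance(0.15e-6, 60)))  # ~17684 ohms
```

Note that at 60 Hz the 0.15 uF capacitor passes somewhat more than the minimum 10 mA - which is fine, since the clamping regulator (described below) absorbs the excess.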

At this point it's worth mentioning that neither the resistor nor the capacitor will actually reduce the voltage - they only limit the current:  If you want to reduce the voltage to something useful, some sort of regulator circuit is required - typically one that clamps it at or below the desired value.

A practical circuit for doing this is depicted in Figure 2, below:

Figure 2:
A typical "capacitive dropper" power supply.
This image is from the Wikipedia article
"Capacitive Power Supply" - link
This supply was designed for use with 220-240 volt mains as
described in the text.

Click on the image for a larger version.


While this may seem a bit complicated at first, it's easy to break down.

On the left we can see "C1" - the "dropping" capacitor that we calculated.  Across this capacitor is R2, a 470k ohm resistor; the purpose of this high-value resistor is to bleed off any charge that might remain on the capacitor if it happened to be disconnected at the instant that the sine wave of the AC mains was at its peak, preventing that stored charge from shocking the user.  The resistor R1 provides a bit of series current limiting:  If the power switch were closed at the instant that the sine wave of the mains was at its peak, there would be a sudden surge of current through capacitor C1, and R1 limits this surge to prevent damage to other components - while its resistive value of 100 ohms is quite low compared to the reactance of C1, so it produces little heat.
These capacitive dropper circuits, while simple, have a serious drawback:  They cannot - and must not - ever be used in circuits where any part might come in contact with something referenced to ground, including a human body.  Because the circuit is directly connected to the mains without the benefit of a transformer's isolation, if a person or animal touches any part of it - or even touches something powered by it - the result could be a dangerous or fatal electrical shock!

The other caveat is that these circuits rely on the fact that a mains supply produces a pretty good sine wave.  If this type of circuit is plugged into an AC source that does not produce a clean sine wave - such as an inexpensive 12 volt power inverter producing a square-type wave (sometimes called a "modified sine wave") - it will be fed with a waveform that, unlike a true sine wave, has a lot of energy at higher frequencies.  The reactance of the dropping capacitor at these higher frequencies is lower, so too much current will flow through the capacitive dropper - and the device that it is powering (a night light, even a Kill-A-Watt) will probably be damaged or destroyed!
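You can see why a square-ish waveform is dangerous by looking at its harmonics.  For an idealized square wave, the nth odd harmonic has 1/n the amplitude of the fundamental - but the capacitor's reactance at that harmonic is also n times lower, so each harmonic pushes roughly as much extra current through the capacitor as the fundamental does.  A sketch under that idealized assumption:

```python
import math

def harmonic_current(v_fund_rms, cap_f, fund_hz, n):
    """Current driven through a dropping capacitor by the nth odd
    harmonic of an ideal square wave:  The harmonic's amplitude
    falls as 1/n, but the capacitor's reactance at n*f also falls
    as 1/n, so the two effects cancel."""
    v_n = v_fund_rms / n                             # harmonic amplitude
    z_n = 1.0 / (2 * math.pi * n * fund_hz * cap_f)  # reactance at n*f
    return v_n / z_n

i1 = harmonic_current(230, 0.47e-6, 50, 1)
i3 = harmonic_current(230, 0.47e-6, 50, 3)
print(round(i1 * 1000, 1))  # fundamental current, mA (~34)
print(round(i3 * 1000, 1))  # 3rd harmonic adds the SAME current again
```

Summed over many harmonics, the total current can be far higher than the sine-wave design value - which is exactly how a "modified sine wave" inverter destroys capacitive-dropper devices.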

The rest of this circuit converts the AC to DC and limits the voltage to a reasonable value:  D1 is a full-wave bridge rectifier and C2 filters the ripple.  Resistor R3 provides a bit of current limiting so that IC1, a programmable Zener diode (an ordinary Zener diode would have worked, too), in combination with R4 and R5, can do the job of clamping the voltage at no more than 5 volts.

What was wrong with the Kill-A-Watt?

By now you have probably figured out where I was going with this discussion.

The display reading a high voltage suggested that the computer within the Kill-A-Watt was getting too low a voltage, causing its internal voltage reference to also be low which, in turn, caused its reading of the line voltage to be too high.  The flickering display was another clue, indicating that there was AC mains ripple on its internal power supply.  Carefully checking the voltage at the input of the unit's onboard 5 volt regulator I saw that it was just 5.2 volts instead of the needed 7+ volts, with about 300 millivolts of ripple - far too low for the regulator to work properly, allowing ripple to get through and explaining why the display was flickering.
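The "sagging reference reads high" effect is easy to illustrate:  An ADC-based meter scales every reading by its voltage reference, so if the reference droops, all readings scale up proportionally.  A sketch with hypothetical numbers (the 5.0 V / 4.2 V reference values below are assumptions chosen to match the observed symptom, not measurements from the actual unit):

```python
def displayed_voltage(actual_v, nominal_ref_v, actual_ref_v):
    """Reading shown by an ADC-based meter whose reference has
    sagged:  the reading scales UP as the reference drops."""
    return actual_v * nominal_ref_v / actual_ref_v

# A nominal 5.0 V reference sagging to ~4.2 V would make a
# 123 V line read about 147 V, as observed:
print(round(displayed_voltage(123, 5.0, 4.2), 1))  # 146.4
```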

Figure 3:
The "bad" capacitor.  Although marked 0.47uF, it measured
0.31uF.  This is an "X1" type "safety" capacitor, specifically
designed for this use.  Generally speaking, this could be
replaced with either an X1 or X2 type of appropriate voltage rating.
Click on the image for a larger version.
Both of these pointed at the high likelihood of its "dropping capacitor" being bad, and upon opening it up I spotted the capacitor on the rear board.  Removing this capacitor, marked "0.47 uF" and checking it, I noted that it read 0.31uF - about 2/3rds of its original value.

On discovering this I knew that I would need another 0.47uF capacitor of the same voltage rating (e.g. 250 volts AC) and that it should be an "X1" or "X2" safety capacitor.  This last point is important as these types of capacitors are designed to fail open rather than short out - the latter being something that would certainly result in spectacular destruction of the device!  It's worth noting that for capacitive dropper power supplies, one should only use appropriately safety-rated capacitors, such as an X1 or X2.  (Note:  The original X1 capacitor was rated for a somewhat higher peak voltage than the X2 with which it was replaced, but both are rated for this particular type of service at this voltage.)

How the capacitor failed:

This "safety" property is also a clue as to why its value reduced from 0.47 to 0.31uF.  One of the reasons why these capacitors can go bad is that their internal conductors, when subject to a fault - say a power surge, a significant spike, or just age - will fail open.

Often, this doesn't happen with the entire capacitor, but small portions of the thin, conductive metal film within the capacitor will degrade, the result being that the value of the capacitor will gradually drop.  In this case, as the capacitor failed, the Kill-A-Watt's circuitry was no longer able to get enough current, causing the voltage to be pulled below the threshold at which it would operate properly and resulting in erratic operation.
Figure 4:
The new capacitor on the board.  While the new capacitor was the
same height as the old - important for being able to fit in the case - it was
longer.  The circuit board has a series of holes on one side allowing
different-sized capacitors to be used.
Click on the image for a larger version.

I was able to rummage around in my box of film capacitors and find a 0.47uF film unit that was similarly rated and was a safety capacitor (with an "X2" rating).  Even though it was the same height as the old one (a "taller" capacitor would not have fit in the case) it was about 25% longer - but the designers of the circuit board had left a series of holes to accommodate several different sizes of capacitors.


With the new capacitor installed I plugged the unit in and it worked normally!

* * *

Many devices such as appliance timers, the electronic controls of stoves and ranges and nightlights uses these "capacitor dropper" supplies.  If you have one of these devices that has stopped working - or it has slowly "faded out" over time - there is a good chance that this is because the main dropper capacitor in its power supply has reduced in value.

If you do attempt a repair, extreme care must be taken if you test it as the entire circuit will be "lit up" by and not isolated from the power mains and posing a possible (fatal!) shock hazard.  If you do replace its capacitor it must not only be of the marked capacitance value of the original capacitor, but it must be an X1 or X2 type safety capacitor of equal or greater voltage rating!


This page stolen from ka7oei.blogspot.com


Wednesday, September 27, 2017

When Band-Pass/Band-Reject (Bp/Br) duplexers really aren't band-pass

In the repeater world there is a misconception that just because the duplexer may say "Band Pass, Band Reject" on its label - or even in its 'spec sheet - that it really does offer a proper band-pass response over a wide range of frequencies.

A close-in look at a typical Band-Pass/Band-Reject duplexer:

Take Figure 1, below, as an example.

Figure 1:
The magenta trace is that of a proper band-pass cavity, the yellow trace is that of a one side (3 cavities) of a 6-cavity Phelps-Dodge "Band-Pass/Band-Reject" duplexer while the cyan trace is the combination of the two.  The top of the yellow peak (with the "1" marker) represents the center frequency of the duplexer with a bit over 1dB loss.

In the analyzer trace above, the YELLOW is the response of one of half of a typical amateur "Band-Pass/Band-Reject" 6-can Phelps-Dodge duplexer tuned in the 2-meter amateur band and from this trace we can see several things happening:
  • As there should be, there is a peak at the pass frequency corresponding to the "band-pass" of the duplexer - in this case, a bit over 1dB loss.
  • Just above the peak - 600 kHz, to be precise - is a very deep notch - corresponding to the frequency to "band-reject" part of the name.  In reality, the depth of the notch depicted in Figure 1 is about 100dB, but the true depth is not apparent from the trace.
  • Once one moves about 1 division (1.5 MHz) either side of the peak/notch frequency, the attenuation isn't that great - only about 20-30dB, and the trace above the center seems to be on a asymptotic trajectory upwards (lower attenuation) as frequency increases.
From the above we can see that while this duplexer offers a "Band-Pass/Band-Reject" response, this occurs only at frequencies very near the input/output frequencies of our hypothetical repeater.  Once you get "farther away", this "band-pass" response diminishes.

On the other hand the MAGENTA trace shows a single band-pass cavity filter.  While its attenuation is not very high at the notch frequency - on the order of 10-15dB - it is apparent that by 2 MHz above the center frequency it is offering greater attenuation than the so-called "Band-Pass/Band-Reject" filter and that below the center, the trend indicates that they might cross over at a point just to the left of the trace.

Comment:  This "Bp/Br" nomenclature is widely applied amongst many manufacturers to duplexers that have the same response as the Phelps-Dodge duplexer above, including Motorola and Wacom - to name but a few.  It is the rare exception to find a "Band-pass/Band-Reject" duplexer that does NOT exhibit the properties described on this page!

Unless you have already installed some band-pass cavities on each leg of your duplexer and/or have done proper sweep responses at frequencies far removed from the designed frequency, you should not assume that your "Bp/Br" duplexer is truly Band-Pass/Band-Reject over a very wide frequency range!

Taking a wider view:

Figure 1 only spans about 7.5 MHz on either side of 2 meters, so let us widen it a bit as shown in Figure 2, below:

Figure 2:
Spanning from 30 MHz to 1 GHz, the same cavities/filters as noted above.  Again, the yellow trace is one half of a 6-cavity "Band-Pass/Band-Reject" duplexer, the magenta trace is the pass cavity alone and the cyan trace is the result of the bandpass cavity and the Bp/Br duplexer cascaded.  It should be noted that at odd-numbered harmonics the pass cavity
will present a narrow bandpass response that can be eliminated with the addition of a simple low-pass filter.

When looking over a much wider frequency range - 30 MHz to 1 GHz - the picture is quite different.  Based on this sweep we can see that our typical "6 can" duplexer - of which 3 "cans" of the transmit or receive side - are represented above in YELLOW and that for the majority of the frequency range there is relatively little attenuation offered overall!  Paying particular attention we see that the attenuation in much of the VHF-low TV band (channels 2-6) and the FM broadcast band is quite poor - on the order of 3-10dB - as is the case over much of the VHF-high (channels 7-13) and large sections of the UHF TV band.

What we can see from this picture is that if we rely on only our so-called "band-pass/band-reject" duplexer on a site with other services such as FM or TV broadcast, or even land-mobile, those frequencies just above the amateur band, such a duplexer offers relatively little protection against those signals getting into the transmitter or receiver.

Why it matters:

One might wonder why it would matter whether or not a duplexer offered good "far-off-frequency" rejection.

In many cases, particularly in mountainous areas, amateur repeaters are co-located at sites with other transmitters and if adequate filtering is not implemented those "other" signals can get into the repeater's receiver and/or transmitter.

The effects of these other signals' ingress into the receiver is easier to envision:  Many of us have observed that, while driving about, our mobile radios have occasionally been overloaded with other signals - the effect being that we are hearing signals on frequencies where they are not.  This phenomenon is the inevitable result of the receiver's mixer - a device that is designed specifically to make new signals out of multiple signals in the first place - synthesizing entirely new ones out of the several that get in via its antenna.

Several decades ago it was common for land-mobile VHF and UHF radios to have receivers that had very tight filtering as there were typically only a few, closely-spaced channels that were used.  By virtue of this extensive filtering it was unlikely that other signals' somewhat-removed frequencies could even get in and cause undesired signals to be generated.  These days most radios have very broad filtering in their receiver inputs - this, to allow a wide range of frequencies to be accommodated.  While convenient, this also has a down side:  Those formerly widely-spaced frequencies from other services now have little impediment and it is more likely that they will get into the receiver and produce undesired, spurious signals.

Many years ago it was also the case that many repeaters used modified land-mobile radios with their extensive filtering, but nowadays many "store bought" repeaters (such as the Icom D-Star and Yaesu Fusion lines) are simply beefed-up mobile radios with "broad as the proverbial barn door" filtering on their receivers.  While this is convenient for the repeater owner to not have to dig up some test equipment and tune up these receivers' narrow filters, this also means is that there are many instances where a club has replaced their old, crystal-controlled analog repeater with a new one - only to find out that it did not work well at all when these off-frequency signals - formerly blocked by the old receiver's narrow front-end filter - clobbered the new receiver.

What's worse is that at many sites this sort of interference may be intermittent in nature - occurring only when a certain combination of transmitters happens to be on the air at once.  With most repeaters using subaudible tones for access, this degradation is often masked since the repeater may stay silent while it is being impacted, the only clue being that some users may suddenly find it difficult to get into the repeater with a good signal at random times.  In other words, unless one uses the proper test equipment to take and record repeatable measurements at or away from the site, gradual or occasional degradation of the receiver's performance may not be so apparent.

An insidious problem:

While the overloading of a receiver is a familiar problem to many of us, it may not be as obvious that a similar thing can happen in a transmitter.  Like a receiver, a transmitter has the ability to take two signals and produce others via mixing.  For this to happen it usually requires that the "other" signals are very strong - but this is something that can happen at a busy radio site!

As a demonstration of what can happen, it was noted that via a VHF antenna atop Farnsworth Peak near Salt Lake City, Utah - a very busy broadcast site - one could read 100-150 milliwatts of RF on the coaxial cable at the input to the duplexer.  When this energy was analyzed it was found to be a combination of FM broadcast and UHF TV signals - the same transmitters that produce several megawatts of effective radiated power, combined.  If the same 6-cavity duplexer depicted in Figure 1 and Figure 2 was inserted in the line, this power would reduced - but only to the 20-50 milliwatt level!

This power was measured on the feedline of what would be a D-Star repeater, but prior to the installation of that repeater an analog Kenwood TK-740 repeater had been used for several months to assess coverage and performance prior to the installation of the Icom D-Star repeater.

On the day that the D-Star repeater was installed it was discovered that no-one could get into it, despite running 50 watts.  Upon analysis it was discovered that the 20-50 milliwatts coming back into the coax was causing the Icom repeater's receiver to be deafened (desensed) by about 40dB - a factor of 10,000-fold!  Upon reconnecting the TK-740, no problems were noted and it was realized that the Kenwood repeater had a more traditional, narrow-band helical resonator filter assembly in its front end and compared to the more modern "broad-band" front end of the Icom repeater - which used parts of modified mobile radios - that the power in from the antenna was completely demolishing its receiver!
Figure 3:
A typical Motorola  4-can duplexer for UHF.  Just like its VHF counterparts
it easily passes energy at frequencies above and below its tuned frequency.
Click on the image for a larger version.

The installation of two bandpass cavities on the receive side allowed the Icom repeater to work as well as the old Kenwood analog repeater with its superior filtering - but this brings up the question about what might happen on transmit?

The transmitter can also act as a mixer:  Multiple signals - one of which might be the repeater's output frequency - can combine within the circuitry and instead of only the transmit frequency being emitted, some conglomeration of signals can appear!

In the example of a VHF transmitter we know that while the low-pass filter may remove the frequencies above the 2-meter band - say, UHF land-mobile and UHF TV - it will do nothing to remove energy from FM broadcast stations.  Similarly, if this were a UHF transmitter, its low-pass filter might remove some of the UHF land-mobile and UHF TV energy, but it would have little effect on signals from FM broadcast and VHF high and low band TV.

It might be suggested at this point that the use of an isolator - a device that, while allowing the transmitter's energy to go to the antenna, it directs any power coming back down the coax into a dummy load so that it cannot even get to the transmitter, might be appropriate here - and this would be correct...  Mostly.  While these devices are invaluable - and even required equipment at many radio sites - to both prevent RF from nearby-frequency transmitters from getting into your transmitter - and then re-radiated again and also to insulate your transmitter from a bad VSWR - it is far less-effective when the frequencies that are coming back down the coaxial cable are away from its design frequency.  In other words, while your VHF isolator may work okay from 140 to 160 MHz, it will probably do comparatively little at the FM broadcast band and in the UHF range.

Adding a pass cavity:

It is, therefore, a very good idea to equip any repeater with at least two pass cavities:  One on the receiver, tuned to the receive frequency and another on the transmitter, after the isolator, tuned to the transmit frequency.

If one examines both Figures 1 and 2 you can see the Magenta trace showing the response of a single pass cavity.  When compared to the response of a typical Bp/Br duplexer (the YELLOW trace) the general trend is that the farther away one gets from the pass frequency, the more attenuation it offers.  One quirk of band-pass cavities is that they also have a response at odd multiples of their pass frequency, which means that a 2-meter pass cavity will also pass energy around the low end of 70cm, around 700 MHz, and so-on.  In the case of 2 meters, this spurious response could be eliminated by the addition of a low-pass filter.

Both figures 1 and 2 also show something else:  What happens if you cascade a Bp/Br duplexer with a single pass cavity (the CYAN trace)?  For the most part the overall attenuation of the two sets of filters is complementary - that is, the "best of both worlds."  As can be seen the simple addition of a pass cavity knocks out almost everything that is off-frequency from that which is desired.
Figure 4:
A typical "4 can" (2 on transmit, 2 on receive)
2-meter duplexer.  Even though it is labeled
as a "band-pass/band reject" unit, this refers only
to the two frequencies of interest - the transmit
and receive - and not to the RF spectrum overall!
The plots in Figures 1 and 2 are from a similar,
"6-can" (3 on tx, 3 on rx) duplexer, but the
rejection of frequencies "far removed" from
where they are tuned is comparable.
Click on the image for a larger version.

Bandpass cavities have another important property as well:  Lightning protection.  Because lightning is a broad-band energy spike, it would make sense that if you reduce the passband of the path from the antenna, less RF energy, overall, will get in - and the use of a passband cavity also guarantees that there is NO DC path from the center pin of the coax from the antenna to the center pin of the coax going to the radio.  One radio club - the Utah Amateur Radio Club - has several mountain top repeaters and there have been a number of instances where the repeater antenna has taken a direct lightning hit, sometimes destroying the antenna, but never has the attached receiver or transmitter ever been damaged.


If you are installing a repeater or other radio at a site with any other transmitters you should not assume that just because the label or specifications of the duplexer say that it is "Band-Pass/Band-Reject" that it will actually do so over a wide range of frequencies.  Again, most brands of duplexers will simply pass, with relatively little attenuation, those frequencies that are far removed from the operating frequencies and the "band-pass/band-reject" nature is limited to the specific frequencies of interest - such as the transmit side of the duplexer passing the transmit signal but rejecting energy at the receive frequency.

Such a duplexer should always be supplemented with at least one bandpass cavity for the transmit frequency and another for the receive frequency to provide additional off-frequency rejection - and adding a simple low-pass filter on each leg won't hurt, either.  While these added elements result in higher signal loss, this need only be 1dB or less in most cases.  Adding this extra cavity will increase the effectiveness of an isolator on the transmitter - which works only well near its design frequency anyway - but it will also prevent excess, off-frequency energy from getting into the repeater's receiver which, these days, is more typically a "mobile" unit with a very broad front end that has been converted.  Finally, the humble band-pass cavity provides good lightning protection, just by its very nature!


This page stolen from ka7oei.blogspot.com


Tuesday, September 12, 2017

Using an inexpensive MPPT controller in a portable solar charging system

As I'm wont to do, I occasionally go backpacking, carrying (a bit too much!) gear with me - some of it electronic, such as a camera, GPS receiver and ham radio(s).  Because I'm usually out for a week or so - and because I often have others with me who may also have battery-powered gear - there arises the need to keep their batteries charged as well.

Having done this for decades I've carried different panels with me over that time, wearing some of them out in the process, so it was time for a "refresh" and a new system using both more-current technology and based, in part, on past lessons learned.

Why 12 volt panels?

If you look about you'll find that there are a lot of panels nowadays that are designed to charge USB devices - which is fine if all you need to do is charge USB devices, but many cameras, GPS receivers, radios and other things aren't necessarily compatible with being charged from just 5 volts.  The better solution in these cases is to start out with a higher voltage - say, that from a "12 volt" panel intended also for keeping a car battery afloat - and convert it down to the desired voltage(s) as needed.

After a bit of difficulty in finding a small, lightweight panel that natively produced the raw "12 volts" output from the array (actually, 16-22 volts unloaded) I found a 18 watt folding panel that weighed just a bit more than a pound by itself.  It happened to also include a USB charge socket - but can be hard to find one without that accessory!
Figure 1:
The 6-7aH LiFePO4 battery, MPPT controller and "18 watt" solar panel.
The odd shape of the LiFePO4 battery is due to its being intended to power
bicycle lighting, fitting in a water bottle holder.
Click on the image for a larger version.

By operating at "12 volts" you now have the choice of practically any charging device that can be plugged into a car's 12 volt accessory socket (e.g. cigarette lighter) and there are plenty of those about for nearly anything from AA/AAA chargers for things like GPS receivers and flashlights to those designed to charge your camera.  An advantage of these devices is that nowadays, they are typically very small and lightweight, using switching power converters to take the panels voltage down to what is needed with relatively little loss.

But there is a problem.

If you use a switching power converter to take a high voltage down to a lower voltage, it will dutifully try to maintain a constant power output - which means that it will also attempt to maintain a constant power input as well - and this can lead to a vexing problem.

Take as an example of a switching power converter that is 100% efficient, charging a 5 volt device at 2 amps, or (5 volts * 2 amps =) 10 watts.

If we are feeding this power converter with 15 volts, we need (10 watts / 15 volts =) 0.66 amps, but if we are supplying it with just 10 volts, we will need (10 watts / 10 volts =) 1 amp - all the way down to 2 amps at 5 volts.  What this means is that while we always have 10 watts with these differing voltages, we will need more current as the voltage from the panel goes down.

Now suppose that we have a 15 watt solar panel.  As is the nature of solar panels, there is a "magic" voltage at which our wattage (volts * amps) will be maximum, but there is also a maximum current that a panel will produce that remains more or less constant, regardless of the voltage.  What this means is that if our panel can produce its maximum power at 15 volts where it is producing 1 amps, if we overload the panel slightly and cause its voltage to go down to, say, 10 volts, it will still be producing about 1 amp - but only making (10 volts * 1 amp =) 10 watts of power!  Clearly, if we wish to extract maximum power to make the most of daylight we will want to pick the voltage at which we can get the maximum power.

Dealing with "stupid" power converters:

Suppose that, in our example, we are happily producing 10 watts of power to charge that 5 volt battery at 2 amps.  At 15 volts, we need only 0.66 amps to get that 10 watts, but then a black cloud comes over and the panel can now produce only 0.25 amps.  Because our switching converter is "stupid", it will always try to pull 10 watts - but when it does so, the voltage on its input, from the panel, will drop.  In this scenario, our voltage converter will pull the voltage all of the way down to about 5 volts - but since the panel can only produce 0.25 amps, we will be charging with only (5 volts * 0.25 amps =) 1.25 watts.

Now, the sun comes out - but the switching converter, being stupid, is still trying to pull 10 watts, but since it has pulled the voltage down to 5 volts to charge the battery, we will need 2 amps to feed the converter the 10 watts that it will need to be happy, but since our panel can never produce more than an amp, it will be stuck there, forever, producing about only (5 volts * 1 amp =) 5 watts.

If we were to disconnect the battery being charged momentarily so that the switching converter no longer saw its load and needed to try to output 10 watts, the input voltage would go back up to 15 volts - and then when we reconnected the battery, it would happily pull 0.66 amps at 15 volts again and resume charging the battery at 10 watts - but it will never "reset" itself on its own.

What this means is that you should NEVER connect a standard switching voltage converter directly to a solar panel or it will get "stuck" at a lower voltage and power if the available panel output drops below the required load - even for a moment!

Work-arounds to this "stuck regulator" problem:

The Linear regulator

One obvious work-around to this problem where a switching regulator gets "stuck" is to simply avoid using them, instead using an old-fashioned linear regulator such as an LM317 variable regulator or a fixed-voltage regulator in the 78xx series (e.g. 7805 for our 5 volt example).  This type of regulator, if outputting 1 amp, will also require an input of 1 amp, the difference in voltage being lost as heat.  If a black cloud comes over - or it is simply morning/evening with less light - and the panel outputs less current, that lower current will simply be passed along to the load.

The problem with a linear regulator is that it can be very inefficient, particularly if the voltage is being dropped significantly.  For example, if you were to charge the 5 volt device at 1 amp from a panel producing 15 volts, your panel would be producing (15 volts * 1 amp =) 15 watts, you would be charging your device at (5 volts * 1 amp =) 5 watts, but your linear regulator would be burning up the difference -10 watts of heat - wasting most of the energy.  On the up side, it simply cannot get "stuck" like a switching converter, it is very simple, it will cause no radio interference, and it is nearly foolproof in its operation.

Figure 2:
The front of the EvilBay "5 amp MPPT charger".  This unit is
an inexpensive unit that used the "Constant Voltage" algorithm (see
below) and designed primarily to charge lithium chemistry batteries.
One of the potentiometers is used to set the final charge voltage - between
14.2 and 14.6 volts for a "4 cell" LiFePO4 - and the other is set to the "maximum
power voltage" of the panels to which it is connected.  This unit- as do most
inexpensive units -require that the MPPT voltage of the panels be 2-3 volts
higher than the final charge voltage of the battery being charged.
Click on the image for a larger version.

MPPT power controller

A better solution in terms of power utilization would be to use a more intelligent device such as an MPPT (Maximum Power Point Tracking) regulator.  This is a "smarter" version of the switching regulator that, by design, avoids getting "stuck" by tracking how much power is actually available from the solar panel and never tries to pull more current than is available.  For this discussion we'll talk about the two most common types of MPPT systems.

"Perturb and Observe" MPPT:

This method monitors both the current and voltage being delivered by the panel and internally, calculates the wattage (e.g. volts * amps) on the fly and under normal conditions, and it will change the amount of current that it is trying to pull from the panel up and down slightly to see what happens, hence the name "Perturb and Observe" (a.k.a. "P&O").

For example, suppose that our goal is to get the maximum amount of power and our panel is producing 15 volts at 1 amp, or 15 watts.  Now, the MPPT controller will try to pull, say, 1.1 amps from the panel.  If the panel voltage drops slightly to 14.5 volts so we are now supplying (1.1 amps * 14.5 volts =) 15.95 watts and we were successful in pulling more power to be delivered to our load.  Now, it will try again, this time to pull 1.2 amps from the panel, but it finds that when it does so the panel voltage drops to 12.5 volts and we are now getting (1.2 amps * 12.5 volts =) 15 watts - clearly a decrease!  Realizing its "mistake" it will quickly go back to pulling 1.1 amps to get back to the setting where it can pull more power.  After this it may reduce its current to 1 amp again to "see" if things have changed and whether or not we can get more power - or if, perhaps, the amount of sunlight has dropped - such that trying to pull less current is the optimal setting.

By constantly "trying" different current combinations to see what provides the most power it will be able to track the different conditions that can affect power output of the solar panel - namely the amount of sun hitting it, the angle of that sun and to a lesser extent, the temperature of the solar panel.

Figure 3:
Curves showing the voltage versus current of a typical solar cell.  Once
the current goes above a certain point, the voltage output of a cell
drops dramatically.  The squiggly, vertical line indicates where
the maximum power (e.g. volts * amps) is obtained along the curve.
The upper graphs depict a typical curve with larger amounts
of light while the lower graphs are for smaller amounts of
impinging light.
This graph is from the Wikipedia article about MPPT - link
Click on the image for a slightly larger version.
"Constant Voltage" MPPT:

If you look at the current-versus-voltage curve of a typical solar panel as depicted in Figure 3 you'll note that there is a voltage at which the most power (volts * amps) can be produced (the squiggly vertical line) - a value typically around 70-80% of the open-circuit voltage, or somewhere in the area of 15-18 volts for a typical "12 volt" solar panel made these days.

Many "12 volt" panels currently being made are intended for use with MPPT controllers and have a bit of extra voltage "overhead" as compared to "12 volt" panels made many years ago before MPPT charging regimens were common.  What this means is that a modern "12 volt" panel may have an maximum power point voltage of 16-17 volts as opposed to 14-15 volts for an "older" panel made 10+ years ago.

One thing that you might notice is that, at least for higher amounts of light, the optimal voltage for maximum power for our hypothetical is about the same - approximately 0.45 volts per cell.  We can, therefore, design an MPPT circuit that is designed to cause the panel to operate only at that optimum voltage:  If the sunlight is reduced and the voltage starts to drop, the circuit will decrease the current it is pulling, but if the sunlight increases and the voltage starts to rise, it will increase the current to pull the voltage back down.

This method is simpler and cheaper to implement than the "Perturb and Observe" method because one does not need to monitor the current from the panel (e.g. it cares only about the voltage) and there does not need to be a small computer or some sort of logic to keep track of the previous adjustments.  For the Constant Voltage (e.g. "CV") method the circuit does only one thing:  Adjust the current up and down to keep the panel voltage constant.

As can be seen from Figure 3, the method of using "one voltage for all situations" is not optimal for all conditions as the voltage at which the most power can be obtained changes with the amount of light, which also changes with the temperature of the panel, age, shading, etc.  The end result of this rather simplistic method of optimization is that one ends up with somewhat lower efficiency overall - around 80% of the power that one might get with a well-performing P&O scheme according to some research. Ref. 1

This method can be optimized somewhat if the circuit is adjusted for maximum power output under "typical" conditions that one might encounter.  For example, if the CV voltage is adjusted when the panel is under (more or less) maximum sun on a typical day, it will produce power most efficiently when the solar power production is at its highest and making the greatest contribution to the task at hand - such as charging a battery.  In this case, it won't be as well optimized as well when the illumination is lower (e.g. morning or evening) but because the amount of energy available during these times will be lower anyway, a bit of extra loss from the lack of optimization at those times will be less significant than the same percentage of loss during peak production time.

Despite the lower efficiency, the Constant Voltage method is often found as a single-chip solution to implement low-cost MPPT, providing better performance than non-MPPT alternatives.

Actual implementation:

I was able to find an inexpensive (less than US$10, shipped) MPPT charge control board on EvilBay (called "5A MPPT Solar Panel Regulator Battery Charging") that was adjustable to allow its use with solar panels with open-circuit voltages ranging from 9 to 28 volts and its output being capable of being adjusted from 5 to about 17 volts.  This small board had built-in current regulation set to a maximum of 5 amps - more than adequate for the 18 watt panel that I would be using.

From the pictures on the EvilBay posting - and also once I had it in-hand - I could see that it used the Consonance CN3722 MPPT chip. Ref. 2  This chip performs Constant Voltage (CV) MPPT functions and provides a current-regulated output with the components on the EvilBay circuit board limiting the current to a maximum of 5 amps.  Additionally, this board, when used to charge a battery directly, may be adjusted, using onboard potentiometers, to be optimized for the solar panel's Maximum Power voltage (typically called "Vmp" in panels' specifications) and adjusted for the finish charge voltage for the battery itself, being suitable for many types of Lithium-Ion chemistries - including the "12 volt" LiFePO4 that I was going to use.
Figure 4:
The back side of the MPPT controller showing the heat sink and connections.
The heat sink is adequate for the ratings of this unit.  To save weight and bulk,
the unit was not put in a case, but rather the wires were "zip tied" to the mounting
holes to prevent fatiguing of the wires - and to permit the wires themselves
to offer a bit of protection to the top-side components.
Click on the image for a larger version.

To this end, my portable charging system consists of the solar panel, this MPPT controller and a LiFePO4 battery to provide a steady bus voltage compatible with 12 volt chargers and devices.  By including this "ballast" battery, the source voltage for all of the devices being charged is constant and as long as the average current being pulled from the battery is commensurate with the average solar charging current, it will "ride through" wide variations in solar illumination.  This method has the obvious advantage that a charge accumulated throughout the day can be used in the evening/night to charge those devices or even be used to top off batteries when one is hiking and the panel may not be deployed.
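The "ride through" behavior described above amounts to a simple energy balance, which can be sketched with made-up numbers (the hour-by-hour currents below are purely illustrative, not measurements from my system):

```python
# Toy energy-balance check of the "ballast battery" idea: as long as the
# average charging current over the day meets or exceeds the average load
# current, the battery rides through.  All figures are illustrative.
hours = 24
# Solar charging current (amps) per hour: zero at night, peaking midday.
charge_a = [0]*7 + [0.4, 0.9, 1.2, 1.3, 1.3, 1.3, 1.2, 0.9, 0.4] + [0]*8
load_a = [0.3] * hours     # steady average draw from the "12 volt" bus
capacity_ah = 6.5          # roughly the LiFePO4 pack described in the text
soc_ah = 3.0               # start about half charged

for c, l in zip(charge_a, load_a):
    # Integrate net current hour by hour, clamped to the pack's limits.
    soc_ah = min(max(soc_ah + c - l, 0.0), capacity_ah)

print(round(soc_ah, 2))  # ends the day with charge to spare
```

With these example numbers the pack never empties overnight and finishes the day fuller than it started, which is exactly the buffering role the ballast battery plays.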

Tweaking the "Constant Voltage" MPPT board:

As noted, the EvilBay CN3722 board has two trimmer potentiometers:  One for setting the output voltage - which would be the "finish" charge voltage for the battery - and another for setting the Constant Voltage MPPT point for the panel to be used.

Setting the output voltage is pretty easy:  After connecting the board to a bench supply set for 4-6 volts above the desired output voltage, I connected a fairly light load to the output terminals and set it for the proper voltage.  For a "12 volt" LiFePO4 battery this will be between 14.2 and 14.6 volts, while the setting for a more conventional "12 volt" LiIon battery would be between 16.2 and 16.8 volts, depending on the chemistry and desired finish voltage. Ref. 3  Once this adjustment was done I connected a fully-charged battery to the output along with a light load, power-cycled the MPPT controller and watched as it stabilized, readjusting the voltage as necessary.

Setting the MPPT voltage is a bit trickier.  In this case, a partially discharged battery of the same type and voltage used for the adjustment above is connected to the output of the MPPT controller, in series with an ammeter.  With the solar panel to be used connected and laid out in full sun, the "MPPT Voltage" potentiometer is adjusted for maximum current into the battery being charged.  Again, this step requires a partially-discharged battery so that it will accept all of the charging current that the panel can supply.

Note that the above procedure presumes that the solar panel is too small to drive the MPPT battery charger itself into current limiting - in which case the current limit is that of the panel itself, meaning that the maximum current seen at the charging terminal of the battery reflects the maximum power that can be pulled from the panel.  For example, with an 18 watt panel charging a battery at 13.5 volts we could only ever expect to see about 1.33 amps flowing into the battery due to the inability of the panel to supply more power, but maximizing this current by adjusting the "MPPT" voltage control permits optimization for that particular solar panel.
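That 1.33 amp figure is just the panel's rated power divided by the battery voltage, assuming a lossless converter:

```python
# Back-of-the-envelope check: an 18 watt panel charging a battery sitting
# at 13.5 volts can never push more than P/V amps, even with a perfect
# (lossless) converter - real conversion losses only reduce this further.
panel_watts = 18.0
battery_volts = 13.5
max_charge_amps = panel_watts / battery_volts
print(round(max_charge_amps, 2))  # about 1.33 A, matching the text
```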

If the panel is large enough to cause the MPPT controller to current-limit its charging current (around 5 amps for the controller that I used) then the panel may be slightly oversized for the task - at least at midday, under peak sun.  In that case, one would make the same adjustment in the morning or evening, when the light is low enough that the panel cannot drive the charger into current limiting - or simply block a section of the panel.

While this charging board would be able to connect directly to almost any rechargeable lithium battery, it would be awkward to try to adapt it for each type of battery that one might need to charge "on the trail", so I decided to carry with me a small "12 volt" LiFePO4 battery as well:  The solar panel and MPPT controller would charge that battery, and the various lightweight "12 volt" chargers for the different batteries to be charged would then connect to it.

It's worth noting that MPPT power controllers use switching techniques to perform their efficient voltage conversion.  This means that if a sensitive radio - particularly an HF (shortwave) transceiver - is attached to, or near, the controller, its switching operation may cause interference unless the controller is enclosed in an RF-tight box with appropriate filtering on the input and output leads.  In practice I haven't found this to be an issue, as any HF operation is usually done in the evening, at camp, as things are winding down and the sun isn't out anyway, so the unit is not in service at that time.

Final comments

While the "ballast battery" method has an obvious weight and volume penalty, it has the advantage that if you need to charge a number of different devices, it is possible to find a very small and light 12 volt "car" charger for almost any type of battery that you can imagine.  The other advantage is that with a 12 volt battery that is being charged directly from the MPPT controller, it acts as "ballast", allowing the charging of this "main" battery opportunistically with the available light as well as permitting the charging of the other batteries at any time - including overnight!

The 18 watt panel weighs 519 grams (1.14 pounds), the MPPT charge controller with attached wires and connectors weighs 80 grams (0.18 pounds), a cable connecting the panel to the MPPT controller weighs 60 grams (0.13 pounds), while the 6-7 amp-hour LiFePO4 battery pictured in Figure 1 Ref. 4 weighs in at 861 grams (1.9 pounds).  The total weight of this power system is about 1520 grams (3.35 pounds) - which can be quite a bit to carry in a backpack, but considering that it can provide the power needs of a fairly large group and that this weight can be distributed amongst several people if necessary, it is "tolerable" for all but those occasions where the utmost in weight savings is imperative.  For a "grab and go" kit that will be transported via a vehicle and carried only a short distance, this amount of weight is likely not much of an issue.

* * *

1 - The article "Energy comparison of MPPT techniques for PV Systems" - link - describes several MPPT schemes, how they work, and provides comparison as to how they perform under various (simulated) conditions.

2 - Consonance Electric CN3722 Constant Voltage (CV) MPPT multichemistry battery charger/regulator - Datasheet link.

3 - Particularly true for LiIon cells, reducing the finish (e.g. cut-off) voltage by 5-10%, while reducing the available cell capacity, can improve the cell's longevity.  This means that if the cut-off voltage of a typical modern LiIon cell - nominally 4.2 volts - is reduced to 4.0 volts, all other conditions being equal, this has the potential to double the cell's useful working life.  While this lower cut-off voltage may initially reduce the available capacity by as much as 25%, a cell consistently charged to the full 4.2 volts will probably lose that much capacity within a year or so anyway, whereas it will lose much less capacity at the lower voltage.  For additional information regarding increasing the longevity of LiIon cells see the Battery University web page "How to Prolong Lithium-based Batteries" - link - and its reference sources.

4 - This LiFePO4 battery has been featured several times before - see these links:
  • Problems with LiFePO4 batteries - link
  • Follow-up:  LiFePO4 batteries revisited - equalization of cells - link


This page stolen from "ka7oei.blogspot.com".

Monday, August 28, 2017

Monitoring the "CT" MedFER beacon from "Eclipse land"

Figure 1:
The MedFER beacon and vertical, tophatted
antenna on the metal roof of my house, attached
to an evaporative ("swamp") cooler.
Click on the image for a larger version.
I must admit that I was "part of the problem" - that is, one of the hordes of people that went north to view the August 21, 2017 eclipse along its line of totality.  In my case I left my home near Salt Lake City, Utah on the Friday before at about 4AM, arriving 4 hours and 10 minutes later - this, after a couple of rest and fuel stops.  On the return trip I waited until 9:30 AM on the Wednesday after, a trip that also took almost exactly 4 hours and 10 minutes, including a stop or two - and I had no traffic in either case.

This post isn't about my eclipse experiences, though, but rather the receiving of my "MedFER" beacon at a distance of about 230 miles (approx. 370km) as the crow flies.

What's a MedFER beacon?

In a previous post I described a stand-alone PSK31 beacon operating just below 1705 kHz at the very top of the AM broadcast ("Mediumwave") band under FCC Part 15 §219 (read those rules here).  This portion of the FCC rules allows the operation of a transmitter on any frequency (barring interference) between 510 and 1705 kHz with an input power of 100 milliwatts, using an antenna that is no longer than 3 meters, "including ground lead."  By operating just below the very top of the allowed frequency range I could maximize my antenna's efficiency and place my signal as far away as possible from the sidebands and splatter of the few stations (seven in the U.S. and Mexico) that operate on 1700 kHz.
Figure 2:
Inside the loading coil, showing the variometer, used to fine-
tune the inductance to bring the antenna system to
resonance.  This coil is mounted in a plastic 5-gallon
bucket, inverted, to protect it from weather.

As described in the article linked above, this beacon uses a Class-E output amplifier, which allows more than 90% of its DC input power to be delivered as RF, making the most of the 100 milliwatt restriction on input power.  To maximize the efficiency of the antenna system, a large loading coil with a variometer, wound using copper tubing, is used to counteract the reactance of the antenna.  The antenna itself is in two pieces:  A section, 1 meter long, mounted to the evaporative cooler sitting on and connected to the metal roof of my house and, above that and isolated from the bottom section, an additional 2-meter long section that is tophatted to increase the capacitance and reduce the required amount of loading inductance, improving overall efficiency.

As it happens, the antenna is mounted in almost exactly the center of the metal roof of my house so one of the main sources of loss - the ground - is significantly reduced, but even with all of this effort the measured feedpoint resistance is between 13 and 17 ohms implying an overall antenna efficiency of just a few percent at most.
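As a sanity check of that "few percent" figure, the textbook short-monopole radiation-resistance approximation gives numbers in the right ballpark.  The 3 meter effective height used below is an assumption (the top hat pushes the effective height toward the full physical height), so treat this as an estimate, not a measurement:

```python
import math

# Rough estimate of electrically-short monopole efficiency.  The
# effective height is an assumption; the feedpoint resistance is the
# middle of the measured 13-17 ohm range from the text.
f_hz = 1.705e6
wavelength = 2.998e8 / f_hz            # ~176 m at the top of the MW band
h_eff = 3.0                            # assumed effective height, meters
r_rad = 40 * math.pi**2 * (h_eff / wavelength)**2   # short-monopole formula
r_feed = 15.0                          # measured feedpoint resistance, ohms
efficiency = r_rad / r_feed            # radiated fraction of applied power
print(f"{r_rad:.3f} ohms, {100*efficiency:.2f}%")
```

The radiation resistance works out to a small fraction of an ohm, so with a 13-17 ohm feedpoint resistance the efficiency lands around one percent - consistent with the "few percent at most" estimate above.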

Figure 3:
The tophatted vertical antenna, loading coil and transmitter, looking up
from the base.  In the extreme foreground along the left side of the
picture can be part of the weather-resistant metal box that
contains the transmitter.
Click on the image for a larger version.
Although originally intended only as a PSK31 beacon, I later added the capability of operating on 1700 kHz using AM and of on/off keying the carrier at the original "1705" kHz PSK31 frequency, permitting the transmission of Morse code messages.  To maximize the likelihood of the signal being detected, I operate this last mode - Morse - using "QRSS3", a "slow" Morse sending speed in which the "dit" length of the characters being transmitted is 3 seconds - as is the space between character elements - while a "dah" and the space between characters is 9 seconds.
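The QRSS3 timing described above can be sketched as a simple schedule generator (a hypothetical helper for illustration, not the beacon's actual firmware):

```python
# QRSS3 keying schedule sketch: dit = 3 s, dah = 3 dits = 9 s,
# inter-element gap = 3 s, inter-character gap = 9 s.
MORSE = {"C": "-.-.", "T": "-"}  # just the characters needed for "CT"
DIT = 3  # seconds - the "3" in QRSS3

def qrss_schedule(text):
    """Return a list of (key_down_seconds, key_up_seconds) per element."""
    events = []
    for i, ch in enumerate(text):
        for j, sym in enumerate(MORSE[ch]):
            on = DIT if sym == "." else 3 * DIT
            last_elem = j == len(MORSE[ch]) - 1
            last_char = i == len(text) - 1
            if last_elem and last_char:
                off = 0              # nothing follows the final element
            elif last_elem:
                off = 3 * DIT        # gap between characters
            else:
                off = DIT            # gap between elements
            events.append((on, off))
    return events

sched = qrss_schedule("CT")
total = sum(on + off for on, off in sched)
print(sched, total)  # a single "CT" takes 51 seconds to send
```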

Sending Morse code at such a low speed allows sub-Hz detection bandwidths to be used, greatly improving the rejection of other signals and increasing the probability that the possibly-minute amount of energy reaching the antenna may be detected.
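The benefit of those narrow bandwidths is easy to quantify:  Noise power is proportional to bandwidth, so going from a typical SSB passband to a sub-Hz detection bin yields a large processing gain (the exact bandwidths below are illustrative):

```python
import math

# Noise power scales with bandwidth, so narrowing the detection bandwidth
# from a ~2500 Hz SSB passband to a ~0.3 Hz QRSS bin improves SNR by
# roughly 10*log10(B1/B2) dB.  The bandwidths here are typical, assumed
# values, not measurements from the actual setup.
ssb_bw_hz = 2500.0
qrss_bin_hz = 0.3
gain_db = 10 * math.log10(ssb_bw_hz / qrss_bin_hz)
print(round(gain_db, 1))  # roughly 39 dB of processing gain
```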

Detecting it from afar:

Even though this beacon had been "received" as far away as Vancouver, BC (about 800 miles, or 1300 km) using QRSS during deep, winter nights, I was curious if I could hear it during a summer night near Moore, ID at that 230 mile (370km) distance.  Because we were "camping" in a friend's yard, we (Ron, K7RJ and I) had to put up an antenna to receive the signal.

The first antenna that we put up received strong AC mains-related noise - likely because it paralleled the power line along the road.  Re-stringing the same 125-ish feet (about 37 meters) of antenna wire at a right angle to the power line and stretching out a counterpoise along the ground got better results:  Somewhat less power line noise.  It was quickly discovered that I needed to run both the receiver and the laptop on battery, as any connection to the power line seemed to conduct noise into the receiver - probably a combination of noise already on the power line as well as low-level harmonics of the computer's switching power supply.

I'd originally tried using my SDR-14 receiver, but I soon realized that between the rather low signal levels being intercepted by the wire - which was only about 10 feet (3 meters) off the ground - and the relative insensitivity of this device, I wasn't able to "drive" its A/D converter very hard, resulting in considerable "dilution" of the received signals due to quantization noise.  In other words, it was probably only using 4-6 bits of the device's 14 bit A/D converter!
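The "dilution" from an underdriven A/D converter follows from the ideal-quantizer rule of thumb of roughly 6 dB of signal-to-noise ratio per bit (the bit counts below are illustrative):

```python
# Ideal-quantizer SNR rule of thumb: SNR ≈ 6.02*N + 1.76 dB for N bits.
# If weak signals exercise only ~5 of the A/D's 14 bits, the usable
# dynamic range collapses accordingly.  The bit counts are illustrative.
def ideal_adc_snr_db(bits):
    return 6.02 * bits + 1.76

print(round(ideal_adc_snr_db(14), 1))  # ~86.0 dB if fully driven
print(round(ideal_adc_snr_db(5), 1))   # ~31.9 dB when underdriven
```

The difference of more than 50 dB between the two cases is why driving the converter hard enough matters so much for an SDR front end.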

I then switched to my FT-817 (with a TCXO known to be accurate to within a few tenths of a part-per-million), which had no trouble "hearing" the background noise.  Feeding the output of the '817 into an external 24 bit USB sound card (the sound card input of my fairly high-end laptop - as with most laptops - is really "sucky") I did a "sanity check" of the frequency calibration of the FT-817 and the sound card's sample rate using the 10 MHz WWV signal, found it to be within a Hertz of the correct frequency, and then re-tuned the receiver to 1704.00 kHz using upper-sideband.  It had been several years since I'd measured the precise frequency of my MedFER beacon's carrier, last observed at 1704.966 kHz, so I knew that it would be "pretty close" to that value - but I wasn't sure how much its crystal might have drifted over time.

For the signal analysis I used both "Spectrum Lab" by DL4YHF (link here) and the "Argo" program by I2PHD (link here).  Spectrum Lab is a general-purpose spectral analysis program with a lot of configurability which means that there are a lot of "knobs" to tweak, but Argo is purposely designed for modes like QRSS using optimized, built-in presets and it was via Argo that I first spotted some suspiciously coherent signals at an audio frequency of between 978 and 980 Hz, corresponding to an RF carrier frequency of 1704.978 to 1704.980 kHz - a bit higher than I'd expected.
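For upper-sideband reception, the received RF frequency is simply the dial frequency plus the audio offset, which is how the audio tone maps back to the carrier frequency:

```python
# USB tuning arithmetic: received RF = dial frequency + audio offset.
# With the FT-817 dialed to 1704.00 kHz and the tone near 978.8 Hz:
dial_hz = 1_704_000
audio_hz = 978.8
rf_khz = (dial_hz + audio_hz) / 1000
print(round(rf_khz, 4))  # 1704.9788 kHz, matching the observed carrier
```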

As we watched the screen we could see a line appear and disappear with the QSB (fading) and we finally got a segment that was strong enough to discern the callsign that I was sending - my initials "CT".

Figure 4
An annotated screen capture of a brief reception, about 45 minutes after local sunset, of the "CT" beacon using QRSS3 with the "oldest" signals at the left.  As can be seen, the signal fades in so that the "T" of a previous ID, a complete "CT" and a partial "C" and a final "T" can be seen on the far right.  Along the top of the screen we see that ARGO is reporting the peak signals to be at an audio frequency of 978.82 Hz which, assuming that the FT-817 is accurately tuned to 1704.00 kHz indicates an actual transmit frequency of about 1704.979 kHz.

As we continued to watch the ARGO display now and again we could see the signal fade in and out and be occasionally clobbered by the sidebands of an AM radio station on 1700 kHz - at least until something was turned on in a nearby house that put interference everywhere around the receive frequency.

The original plan:

The main reason for leaving the MedFER beacon on the air during the eclipse - and for going through the trouble of setting up an antenna - was to see if its signal would pop up out of the noise during the depth of the eclipse, the idea being that the ionospheric "D" layer would dissipate in the temporary darkness along the path between my home (where the eclipse would attain about 91% totality) and the receive location within the path of totality, allowing the signal to emerge.  In preparation, I set up the receiver and the ARGO program to automatically capture - and then re-checked it about 5 minutes before totality.

Unfortunately, while I'd properly set up ARGO to capture, I'd not noticed that I'd failed to click on the "Start Capturing" button in ARGO and the computer happily ran unattended until, perhaps, 20 minutes after totality, so I have no way of knowing if the signal did pop up during that time.  I do know that when I'd checked on it a few minutes before totality there was no sign of the "CT" beacon on the display.

In retrospect, I should have done several things differently:
  • Brought a shielded "H" loop, which would have offered a bit of receive directionality and the ability to reject some of the locally-generated noise - and would have saved us the hassle of stringing hundreds of feet of wire through trees.  Some amplification with this loop would also have helped the SDR-14 work properly.
  • Actually checked to make certain that the screen capture was activated!
  • Recorded the entire event to an uncompressed audio (e.g. ".WAV") file so that it could be re-analyzed later.
 Oh well, you live and learn!

P.S.  After I returned I measured the carrier frequency of the MedFER beacon using a GPS-locked frequency reference and found it to be 1704.979 kHz - just what was measured from afar!

