Second Signal Scanner Monitor
Summary of Reported Issues
27 JUNE 2007 - RELEASE CANDIDATE 1
If you have a comment, question, feedback, or any other issue, please send me an email!
I really do want to hear as many comments as possible!

Installation:

ZIP File Problem:
We have had one report of the initial zip file failing to unpack; however, that user was able to download a fresh copy and had no difficulty with it.

.NET Framework:
A few users on older machines running Microsoft Windows 2000 have had to install the .NET Framework from Microsoft. There is no charge from Microsoft for this framework (which is included on XP and Windows 2003 machines), but these users have found that both the 1.1 and 2.0 versions of the framework had to be downloaded and installed. On versions later than Windows 2000, only the .NET Framework version 2.0 has had to be downloaded, as an update to the original 1.0 version that some workstations were installed with. The .NET Framework can be time consuming to install on these older machines.

Operational:

False Tone Reporting:
I’ve seen some real confusion related to ‘false’ tones being reported in the log files. Please take a moment to understand the different purposes of the “discovery” or “general” tone settings compared to the actual tone “trap” functionality.

The general settings help the application pick out and log all of the “potential” tones from any background chatter. In this general mode, it is better to have more possible tones reported than to lose even a single “real” one. Use this mode to discover the real tones, and then build “traps” on those tones that take the actions you want when an activation happens. For example, a nearby town includes a ‘warble’ between the first and second tone for both fire and rescue. The log files and “discovery” window show many different possible tone frequencies, but two stand out as longer and clearer than the others. Those two frequencies are the ones to use when building the tone trap.
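
If it helps to see the idea spelled out, here is a rough sketch (in Python, and definitely not the application’s actual code) of how a discovery pass might tag the dominant frequency in each short block of scanner audio. The sample rate and the simple ratio threshold are assumptions made just for this illustration:

    import numpy as np

    SAMPLE_RATE = 8000            # assumed sample rate of the scanner audio
    BLOCK_MS = 185                # block length discussed in this document
    BLOCK_LEN = SAMPLE_RATE * BLOCK_MS // 1000

    def dominant_frequency(block, ratio_threshold=10.0):
        """Return the strongest frequency in one block, or None if the peak
        does not stand far enough above the rest of the spectrum.  The
        threshold is a plain ratio, not the application's own signal to
        noise scale."""
        spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
        freqs = np.fft.rfftfreq(len(block), d=1.0 / SAMPLE_RATE)
        peak = spectrum.argmax()
        noise = (spectrum.sum() - spectrum[peak]) / (len(spectrum) - 1)
        if spectrum[peak] < ratio_threshold * max(noise, 1e-12):
            return None
        return freqs[peak]

    def discover_tones(samples):
        """Print one candidate frequency per block; in discovery mode it is
        normal, and desirable, to see more candidates than real tones."""
        for start in range(0, len(samples) - BLOCK_LEN + 1, BLOCK_LEN):
            freq = dominant_frequency(samples[start:start + BLOCK_LEN])
            if freq is not None:
                print(f"possible tone: {freq:.1f} Hz")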

The trap monitoring process ignores any frequency other than the ones it is looking for according to the trap form. It watches the possible tones for one that matches the first condition, waits for that tone to be present for the duration specified in the trap, then logs a hit on that condition and moves on to the next one. It then watches for the second tone condition to hit within the specified wait time before moving on again. When all of the active trap conditions have been met (usually two, but up to four can be specified), the trap is considered “complete”, which is logged, and the specified action is taken.
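
For anyone who prefers to see it as code, here is a rough Python sketch of that trap sequence. The field names, the 5 Hz frequency tolerance, and the event format are assumptions for the illustration only, not the actual implementation:

    from dataclasses import dataclass

    @dataclass
    class TrapCondition:
        frequency_hz: float      # the tone frequency this condition watches for
        min_duration_ms: int     # the tone must hold at least this long to hit
        max_wait_ms: int = 0     # allowed gap after the previous hit (conditions 2+)

    def run_trap(conditions, tone_events, tolerance_hz=5.0):
        """tone_events is a time-ordered list of (time_ms, frequency_hz,
        duration_ms) tuples, i.e. the possible tones the general pass has
        already logged.  Returns True only when every condition hits in order."""
        index = 0
        last_hit_ms = None
        for time_ms, freq, duration in tone_events:
            cond = conditions[index]
            # Any frequency other than the one this condition is looking for
            # is simply ignored, as is a matching tone that is too short.
            if abs(freq - cond.frequency_hz) > tolerance_hz or duration < cond.min_duration_ms:
                continue
            # Conditions after the first must hit within the specified wait
            # time, otherwise the sequence is broken and nothing activates.
            if index > 0 and (time_ms - last_hit_ms) > cond.max_wait_ms:
                return False
            print(f"trap condition {index + 1} hit at {time_ms} ms")
            last_hit_ms = time_ms
            index += 1
            if index == len(conditions):
                print("trap complete, taking the configured action")
                return True
        return False

A two-condition trap matches the usual two-tone page; a third or fourth condition simply extends the same sequence.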

Seeing false hits on one condition or another is fairly common. A log entry showing that the first condition was hit, but then nothing on the second and no trap complete, just means that some noise came across that wasn’t sufficiently like the tone you specified in your trap to activate it. Compare this to a Motorola Minitor, which stays silent in these cases. Only when the entire sequence of specified trap conditions is met (both reeds, if you’re talking about a Minitor) does the activation happen.

What is the moral of this story? You can avoid “falsing” in your traps entirely by setting the minimum tone duration and the maximum wait time between conditions with enough specificity.

You can also reduce the number of false possible tones by increasing either the minimum tone duration or the minimum signal to noise setting. In the case of the minimum tone duration, I’m currently recommending 370 ms, which means a tone is considered possible if a frequency is “dominant” in two consecutive blocks of audio, each block being 185 milliseconds long. Specifying a setting of 555 milliseconds, meaning three consecutive blocks in which the frequency is dominant, will greatly reduce the number of false positives and should work in most cases, but it may miss tones in some departments and may not strictly comply with tone specifications that require finer-grained detection. (The short example below spells out this block arithmetic.)

In the case of increasing the signal to noise setting, the right value depends greatly on how much background “hum” and other noise you have on the scanner input to the PC. If you have a really nice, clear signal, a setting as high as 85 or 90 will work for you and will cut out a lot more of the chatter, allowing you to use a shorter minimum duration. I’ve seen some poor radio carriers with enough background noise on the channel to need a setting as low as 40. Unfortunately, the lower the setting you use for signal to noise, the longer the setting you should use for minimum tone duration.
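
To make the block arithmetic above concrete, here is a tiny worked example (an illustration only, not code from the application):

    BLOCK_MS = 185    # length of one analysis block, in milliseconds

    def blocks_required(min_tone_duration_ms):
        """How many consecutive "dominant" blocks a frequency must win
        before it is reported, for a given minimum tone duration setting."""
        return min_tone_duration_ms // BLOCK_MS

    print(blocks_required(370))   # 2 consecutive blocks
    print(blocks_required(555))   # 3 consecutive blocks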


If you have a comment, question, feedback, or any other issue, please send me an email!
I really do want to hear as many comments as possible!