Margit --
Yes! Thanks for circulating this summary article.
It is very much the kind of reasoned, empirical discussion
that all of us concerned with vote voodoo need
to understand and then communicate to fellow citizens.
Yet many folks take one look at so-called "technical"
stuff and turn off their brains.
It looks like the (apparently knowledgeable) author(s) --
Howard Stanislevic, et al. -- have taken this into consideration
and tried to simplify the discussion for better digestion by
regular non-techie types.
I appreciate the motive.
All around us are demands for dumb-down:
"tell it to me in a headline with words of one syllable".
Yet communications "experts" as well as many other
kinds of folks with this attitude (legislators, election officials,
reporters, lawyers, judges...) are, in the long run,
threats to our cause.
This results in communication that is unsatisfactory for
"headliners" and "techies" alike. The worst of both
worlds. A trap into which we should not step.
The most important point in the article(s) referenced here
is simply the report of unacceptably low reliability standards,
applied by statute and/or administrative rule.
Just plain badness.
This is a clear information chunk that we can all understand:
Existing reliability standards (such as those established by statute)
allow such poor performance and are so poorly conceived that
they are worthless in support of democracy.
A bad joke on voters.
The rest of the information, however cobbled together for
non-techie consumption, however well-intentioned, tends to leave
wrong impressions, leave out key concepts, and basically misdirect
a regular reader's attention.
Folks who see formulas, such as those presented, may be tempted
to apply them to their homemade data sets. I understand how this
could be done with the best of intentions. But these are the
cobblestones on the road to hell!
Heaven forbid ersatz "technical" stuff gets set
(gamed?) into legislation.
Folks who are truly knowledgeable about reliability metrics and
testing will start asking about the detail devils. For example:
availability, serviceability/MTTR, "performability", tested
reliability vs. "parts-count" reliability, accelerated testing,
environmental/stress testing, tracking field failures, reliability
growth, making predictions based on acquiring and fitting good
empirical data using appropriate models, perhaps MTTFF (mean time
to *first* failure), and, more broadly, how to create reliability
test plans and acceptance criteria truly appropriate to the voting
setting (something I've *never* seen, either from governmental
standards/requirements people or from vendors).
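To make a couple of these detail devils concrete, here is a tiny
sketch of my own -- not from Stanislevic's article, and with
made-up numbers and an assumed constant-failure-rate (exponential)
model -- showing how availability and MTTR relate, and why a
per-machine figure looks very different at fleet scale:

    # Illustrative only: made-up numbers, exponential failure model assumed.
    import math

    mtbf_hours = 150.0   # hypothetical mean time between failures, one machine
    mttr_hours = 2.0     # hypothetical mean time to repair/replace in the field

    # Steady-state availability: fraction of time the machine is usable.
    availability = mtbf_hours / (mtbf_hours + mttr_hours)

    # Probability one machine survives a 14-hour election day with no
    # failure, under the (strong) constant-failure-rate assumption.
    election_day_hours = 14.0
    p_no_failure = math.exp(-election_day_hours / mtbf_hours)

    print(f"Availability: {availability:.3f}")
    print(f"P(no failure over a 14-hour day): {p_no_failure:.3f}")

    # Scale up: expected machines failing at least once on election day
    # in a hypothetical county fleet of 1,000 machines.
    fleet_size = 1000
    expected_failures = fleet_size * (1 - p_no_failure)
    print(f"Expected machines with a failure, out of {fleet_size}: "
          f"{expected_failures:.0f}")

Even this toy shows why a single "X% reliable" number hides more
than it reveals: the same MTBF gives one answer for availability,
another for a single machine lasting the day, and yet another for
how many machines in a county will go down.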
The point:
-> Let's welcome and support an empirical -- evidence-based --
approach, such as Stanislevic's.
I think our local vote groups should concentrate on this way of
making arguments (based on *evidence*) and using good data to set
standards that are emblematic of "what we want". Powerful long-term
contributions to our cause. Including doing research and testing
and distribution of findings as needed.
-> Let's *not* get sucked into ill-conceived dumb-down factoids
like "only 30% as reliable as an incandescent light bulb".
Sounds cute, but actually says little.
And diverts attention from what really needs to be said.
The danger is that the technical knowledge -- the evidence -- of
our entire movement gets dragged down into the media mud as a
battle of the ersatz one-liners, and that such glib claptrap
leaves us manipulated and ambushed into sound-and-fury
dumbed-down debates...
leaving the impression somehow that
the bad guys are right
and we are wrong!
Let's not let that happen.
What we are doing is too important for that.
What do you think?
Stith