Caution Gets the Cold Shoulder
I sometimes find myself receiving the cold shoulder because something I have been asked to do requires more investigation and understanding before it can be implemented.
One such situation came up towards the end of last week. A deployment from a utility set was being knocked out by a security policy. We did not realise there was an issue until an urgent investigation could not be completed: the tracker it depended on had already been removed by that same policy.
In layman's terms,
take, for instance, a sophisticated radio jamming implementation that stops all
mobile phones from communicating, except for selected phones with particular
identities.
One essential phone is then brought into the environment but is not exempted from the jamming signal; it might appear to operate, yet it goes blank the moment a call is about to be made.
This is not entirely accurate in technical terms, but it paints the picture of
what the situation was.
A Policy in the Way
We had blocked everything except a list of approved elements, and another system was deploying an element that was not on that list. The element installed successfully, but within a set timeframe the policy removed it, precisely because it was not on the list.
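The mechanism at fault can be sketched roughly as follows. This is a deliberately simplified, hypothetical illustration (the names and the sweep function are mine, not the actual product's): an allowlist policy keeps what is approved and removes anything else on its next pass, which is exactly how the tracker vanished.

```python
# Hypothetical sketch of allowlist enforcement: anything installed
# that is not on the approved list is removed at the next sweep.
APPROVED = {"backup-agent", "av-scanner", "patch-client"}

def enforcement_sweep(installed: set[str]) -> tuple[set[str], set[str]]:
    """Apply the allowlist; return (kept, removed)."""
    kept = installed & APPROVED      # on the list -> allowed to remain
    removed = installed - APPROVED   # not on the list -> removed
    return kept, removed

# The tracker was deployed but never added to APPROVED,
# so the next sweep silently removed it.
kept, removed = enforcement_sweep({"backup-agent", "investigation-tracker"})
print(sorted(kept))     # ['backup-agent']
print(sorted(removed))  # ['investigation-tracker']
```

The point the sketch makes is that the removal is not a malfunction; the policy is doing exactly what it was configured to do, which is why the fix had to be an exemption rather than a workaround.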
Obviously, this put
my colleague in a quandary. They had to explain why information they assumed
would always be retrievable was suddenly unavailable, and this stymied the
investigation another team was trying to commence.
In the broader scheme of things, the security policy had always been there, but for investigatory purposes the tool needed an exemption so it could install and remain installed. The end-to-end facilitation chain had never been engaged, hence the failure at that stage.
Rushing Ahead Without the Facts
The obvious next step was to remediate the issue by allowing the tool to install, but neither of us knew exactly what other parameters it needed to perform as required.
While my colleague
wanted to rush out a fix, I was not convinced we had the right one. We had some
knowledge of what should be done, but no guarantee it would work. In cases such
as this, I would find a subset of users and/or devices to test the premise on,
ascertain that everything works as intended, and then implement it under change
management processes.
However, to my
colleague, I was impeding the process and stalling rather than being proactive,
despite my concerns and feeling that we did not have sufficient information to
proceed.
Their next act was to
extricate themselves from the communication chain, leaving me to face the
pressure of urgent implementation without the full set of data required to have
the confidence that we were doing the right thing.
Right the First Time
Earlier today, I
gained some clarity on the fundamentals of the implementation, including what
the sources were and where the conflicts occurred. With this, I was
sufficiently informed to test the premise of my findings and, beyond that, gain
the full information needed to fix the problem once and for all.
I recognise that I
could be pedantic, and at times, some have suggested I am a perfectionist, which
I would immediately deny. I am thorough, sometimes quite particular and
meticulous; it is simply the nature of the responsibility this job carries.
An accidental deployment can so easily close down a business, and whilst this particular activity does not carry such critical risk, there is one thought I always keep in mind.
I'd rather do it
right the first time, even if it takes longer, than rush it now and have to fix
the issues that arise because I did not devote the necessary time to
understanding what was involved. For that reason, I make no apology. The world
is not ending; it is impatience clouding better judgement.