Risk Management Tools

FEMA, <sarcasm>that haven of competence and efficiency we’ve all come to know and love,</sarcasm> has made life even more difficult. It seems their DNS settings were hosed for some time: the domain lacked an MX record, and then no SMTP server actually responded on the fema.gov host itself. Now it appears the problem has been fixed, per the following:

fema.gov
    primary name server = ns.fema.gov
    responsible mail addr = root.ns2.fema.gov
    serial  = 2005091501
    refresh = 10800 (3 hours)
    retry   = 3600 (1 hour)
    expire  = 604800 (7 days)
    default TTL = 1800 (30 mins)

which makes sense, as one of the things Farber found was that ns2.fema.gov did in fact respond with an SMTP server. But how long was it broken? More importantly, what role does (or should) email play in emergency management? The investigation started when the Wall Street Journal reported that other government departments, like the Department of Health, could not reach anyone at FEMA during the response to Hurricane Katrina while attempting to send them needed briefing information; instead, that information never got through at exactly the time these agencies should have been protecting people and property.
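
If you want to make that kind of check repeatable, here’s a minimal sketch of the two tests implied above: does the domain publish an MX record at all, and is anything actually answering SMTP on the resulting hosts? It assumes the third-party dnspython package; the domain is just the example from this post.

    import smtplib

    import dns.resolver  # third-party: dnspython

    def check_mail_path(domain):
        # Step 1: look up MX records; with none, mail falls back to the A record.
        try:
            answers = dns.resolver.resolve(domain, "MX")
            hosts = [str(r.exchange).rstrip(".") for r in answers]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(f"{domain}: no MX record, falling back to the host itself")
            hosts = [domain]

        # Step 2: try to reach an SMTP server on each mail host.
        for host in hosts:
            try:
                with smtplib.SMTP(host, 25, timeout=10) as smtp:
                    code, _ = smtp.noop()
                    print(f"{host}: SMTP server responding (NOOP -> {code})")
            except (OSError, smtplib.SMTPException) as exc:
                print(f"{host}: no SMTP server responding ({exc})")

    check_mail_path("fema.gov")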

If your organization can’t be reached externally via email, whether through sysadmin incompetence or a security incident, what could you be missing? And why wouldn’t you notice yourself?! How do email and other electronic communications figure in your security response plan, and what happens when they’re unavailable? These are questions we all have to ask ourselves, both in our own organizations and as citizens.

I ran across an O’Reilly article by Scott Berkun on ending the wars between testers and programmers, though why it’s on MacDevCenter is not at all clear to me. The article actually focuses on software testing overall (i.e., quality assurance), but the principles espoused there apply equally well to security planning and testing.

Berkun reviews the typical outlooks of programmers (“given enough time, I can build anything”) and testers (“everything has flaws and I can prove it”). Security folks often have the latter mindset as well, though old-school hackers are really a mix of the two. The usual result in most IT organizations is polarization, and Berkun offers a number of suggestions to counter it.

First is matching responsibility with authority. This is a common management problem: testers (security or otherwise) are the first to get called on the carpet when something goes wrong, but on the front end their concerns are ignored. Even when an ostensible process exists, exceptions are frequently made because development is already running late. A similar problem exists for system administrators, who get dinged for application downtime even when the application itself, including in-house apps, is at fault. When this disparity exists, the group really at fault is management. Responsibility without authority just creates a scapegoat and does nothing to solve the issue at hand or prevent future ones.

He also suggests “early partnerships”. In the world of security, this means that security should be brought in as early in the process as possible, a fundamental tenet of security architecture. Again, good design and development practice is not really different between security and quality.

That leads to the third point: defining quality. While the article offers only vague suggestions in this area, Berkun does point out the need for static requirements and test cases. Changing specifications have been a controversy from time immemorial; technical folks instinctively know this, but management rarely does.
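
In security terms, “defining quality” can be as concrete as pinning a requirement down as an executable test case. Here’s a small sketch of the idea using pytest; sanitize_filename() and its module are hypothetical, purely for illustration.

    import pytest

    from myapp.files import sanitize_filename  # hypothetical application code

    # Static requirement, agreed up front: uploaded filenames must never
    # escape the upload directory.
    @pytest.mark.parametrize("bad_name", [
        "../../etc/passwd",
        "..\\windows\\system32\\config",
        "report.txt\x00.exe",
    ])
    def test_path_traversal_is_rejected(bad_name):
        with pytest.raises(ValueError):
            sanitize_filename(bad_name)

Once a requirement lives in the test suite, it can’t quietly change when the schedule slips; that’s the whole point of making it static.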

His final point is one of the most interesting: if problems exist, they’re likely caused by leaders. Politics filters down. And the problem isn’t just management; it’s also senior staff. If lead architects view testing and security with disdain, and senior security staff think the development side of the house doesn’t have a clue, neither group is going to work well with the other. The individuals in those positions can’t just have technical skills; they need to have gotten As in “plays well with others”.

The article offers some more concrete suggestions in each area, but readers will need to flesh them out further. Personally, I’ve believed for a long time that security groups have a lot to learn from established software testing and QA practices; this article just reinforces that belief.

Get Secure

Hah! I have embedded HTML in this blog entry that has compromised your system. It’s smart enough to attack multiple platforms (including Windows and Linux) and gives me command-line, administrative-level access. Don’t believe me?

Heh. You just lost your privacy.

E-mail: kmax123@mail.com

Fly For A Wifi

While there are lots of dumb ways to secure wireless networks, here are the six dumbest. Read the article; there are a lot more details there.

  1. MAC filtering — if you really thought this was useful, you have no business calling yourself a security admin
  2. SSID hiding — the article lists a few more methods of getting the SSID than I knew about (see the sketch after this list)
  3. LEAP authentication — ditto
  4. Disable DHCP — I had no idea people thought this was a security mechanism
  5. Antenna placement — he makes some good points, but I believe it still has some value in some environments
  6. 802.11a/Bluetooth — same as #4
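
A quick sketch of why #2 in particular buys so little: even when an access point strips the SSID from its beacons, clients broadcast it in probe requests and the AP hands it right back in probe responses. This assumes the third-party scapy package, a wireless card in monitor mode, and an interface named wlan0mon (all assumptions on my part; adjust for your setup and run as root).

    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11Elt, Dot11ProbeReq, Dot11ProbeResp

    def show_ssid(pkt):
        # Information element ID 0 carries the SSID.
        if pkt.haslayer(Dot11ProbeReq) or pkt.haslayer(Dot11ProbeResp):
            elt = pkt.getlayer(Dot11Elt)
            if elt is not None and elt.ID == 0 and elt.info:
                print(elt.info.decode(errors="replace"))

    # Sniff until interrupted; store=False keeps memory usage flat.
    sniff(iface="wlan0mon", prn=show_ssid, store=False)

Nothing fancy, and tools like Kismet do this automatically; the point is just how little work “hiding” the SSID actually forces on an attacker.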