Redundancy/Orthogonality Maxim: When different security measures are thought of as redundant or “backups”, they typically are not. Comment: Redundancy is often mistakenly assumed because the disparate functions of the two security measures aren’t carefully thought through. Compiled by Roger G. Johnston, Ph.D., CPP, Argonne National Laboratory
Depth, What Depth? Maxim: For any given security program, the amount of critical, skeptical, and intelligent thinking that has been undertaken is inversely proportional to how strongly the strategy of "Security in Depth" (layered security) is embraced. Compiled by Roger G. Johnston, Ph.D., CPP, Argonne National Laboratory
A Friday Classic just for you, this time plucked with demonstrable glee from the ad-ridden pages of Wired (to be fair, I am a print and digital subscriber as well...), and written by Matt Honan, in which Mr. Honan reveals his troubled relationship with passwords (and those who wish to abscond with same). Today's Must Read Classic post, from November of 2012.
The National Cyber Security Hall of Fame revealed its class of inductees last week. Fittingly, the old guard - as it were - is well represented. Interestingly, no equivalent ceremony (or banquet) has been announced for the Class of 2015's counterparts in opposition, so to speak.
Adrian Lane, writing at Securosis (Adrian is also an Analyst & CTO of the company), has published a timely piece targeting the deep inclusion of security into DevOps. Today's Must Read, and one of many from the folks at Securosis; a snip appears below. Ladies and Gentlemen, Girls and Boys, without further ado, Mr. Lane writes (sorry, one more ado... note the last bulleted point in Mr. Lane's snippet below):
Reduced errors: Automation reduces errors that are common when performing basic - and repetitive - tasks. More to the point, automation is intended to stop ad-hoc changes to systems; these commonly go unrecorded, meaning the same problem is forgotten over time and needs to be fixed repeatedly. By including configuration and code updates within the automation process, settings and distributions are applied consistently - every time. If there is an incorrect setting, the problem is addressed in the automation scripts and then pushed into production, not by altering systems ad-hoc.
Speed and efficiency: Here at Securosis we talk a lot about 'reacting faster and better' and 'doing more with less'. DevOps, like Agile, is geared towards doing less, doing it better, and doing it faster. Releases are intended to occur on a more regular basis, with a smaller set of code changes. Less work means better focus and more clarity of purpose with each release. Again, automation helps people get their jobs done with less hands-on work. But it also helps speed things up: software builds can occur at programmatic speeds. If orchestration scripts can spin up build or test environments on demand, there is no waiting around for IT to provision systems, as provisioning is part of the automated process. If an automated build fails, scripts can pull the new code and alert the development team to the issue. If automated functional or regression tests fail, the information is in QA's or developers' hands before they finish lunch. Essentially you fail faster, and the subsequent turnaround to identify and address issues is quicker as well.
Bottlenecks: There are several bottlenecks in software development: developers waiting for specifications, select individuals who are overtasked, provisioning of IT systems, testing, and even process itself (e.g., synchronous models like waterfall) can all cause delays. Between the way DevOps tasks are scheduled, the reduction in work being performed at any one time, and the way expert knowledge is embedded within automation, the major bottlenecks common to most development teams are alleviated once DevOps has established itself.
Cooperation and Communication: If you've ever managed software releases, then you've witnessed the ping-pong match that occurs between development and QA. Code and insults fly back and forth between these two groups - that is, when they are not complaining about how long it is taking IT to get things patched and new servers available for testing and deployment. The impact of having operations and development or QA work shoulder to shoulder is hard to articulate, but when the teams focus on a smaller set of problems that they address in conjunction with one another, friction around priorities and communication starts to evaporate. You may consider this a 'fuzzy' benefit until you've seen it firsthand; then you realize how many problems are addressed through clear communication and joint creative efforts.
Technical Debt: Most firms consider the job of development to be producing new features for customers. Things that developers want - or need - to produce more stable code are not features. Every software development project I've ever participated in ended with a long list of things we needed to do to improve the work environment (i.e., the 'To Do' list). This was separate and distinct from new features: new tools, integration, automation, updating core libraries, addressing code vulnerabilities, or even bug fixes. As such, project managers ignored the list, as it was not their priority, and developers fixed issues at their own peril. This list is the essence of technical debt, and it piles up fast. DevOps looks to reverse that priority and target technical debt - or anything that slows down work or reduces quality - before adding new capabilities. This 'fix-it-first' approach produces higher quality, more reliable software.
Metrics and Measurement: Are you better or worse than you were last week? How do you know? The answer is metrics. DevOps is not just about automation, but also about continuous and iterative improvements. The collection of metrics is critical to knowing where to focus your attention. Captured data - from platforms and applications - forms the basis for measuring everything from tangible things like latency and resource utilization, to more abstract concepts like code quality and testing coverage. Metrics are key to knowing what is working and what could use improvement.
Security: Security testing, just like functional testing, regression testing, load testing or just about any other form of validation, can be embedded into the process. Security becomes not just the domain of security experts with specialized knowledge, but part and parcel of the development and delivery process. Security controls can be used to flag new features or gate releases within the same set of controls you would use to ensure custom code, application stack or server configurations are to specification. Security goes from being 'Dr. No' to just another set of tests to measure code quality. - via Adrian Lane writing at Securosis
The last bulleted point, on Security, is a concise description of both functional and technical answers (and a sea change of inclusion in optimized DevOps environs) to the oft observed-by-developers 'No, you cannot do that' perception of information security team members... Most certainly Today's Must Read. Enjoy.
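A minimal sketch of what "security as just another set of tests" can look like in practice: a release gate that runs every check - functional and security alike - and blocks the release on any failure. The tool names and commands below (pytest, bandit, pip-audit, the `src/` path) are illustrative assumptions on my part, not anything prescribed by Mr. Lane's post.

```python
import subprocess
import sys

def run_gate(checks):
    """Run each named check command; a check passes iff it exits 0.
    A missing tool counts as a failure instead of crashing the gate."""
    results = {}
    for name, cmd in checks.items():
        try:
            completed = subprocess.run(cmd, capture_output=True)
            results[name] = completed.returncode == 0
        except FileNotFoundError:
            results[name] = False
    return results

# Illustrative pipeline: security scans sit alongside functional tests,
# in the same gate. Tool names here are stand-ins.
CHECKS = {
    "unit tests": ["pytest", "-q"],
    "static security scan": ["bandit", "-r", "src/"],
    "dependency audit": ["pip-audit"],
}

# In CI you would run:
#     sys.exit(0 if all(run_gate(CHECKS).values()) else 1)
# Safe demonstration with a command guaranteed to exist:
print(run_gate({"interpreter check": [sys.executable, "--version"]}))
```

The point of the single `results` dict is that a failed dependency audit blocks the release through exactly the same mechanism as a failed unit test - security stops being a separate veto and becomes one more quality signal.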
Via Sean Gallagher, writing at Ars Technica, comes this outstanding screed targeting a new Google Inc. (NasdaqGS: GOOG) robotic working canine at the United States Marine Corps Combat Development Command situated at USMC Base Quantico. If you read anything today about military robotics, read Mr. Gallagher's piece. Absolutely Outstanding.
Takes One to Know One: The fourth most common excuse for not fixing security vulnerabilities is that "our adversaries are too stupid and/or unresourceful to figure that out." Comment: Never underestimate your adversaries, or the extent to which people will go to defeat security. Compiled by Roger G. Johnston, Ph.D., CPP, Argonne National Laboratory
Hopeless Maxim: The third most common excuse for not fixing security vulnerabilities is that "all security devices, systems, and programs can be defeated". Comment: This maxim is typically expressed by the same person who initially invoked the Mermaid Maxim, when he/she is forced to acknowledge that the vulnerabilities actually exist because they’ve been demonstrated in his/her face. Compiled by Roger G. Johnston, Ph.D., CPP, Argonne National Laboratory
Really has to be read to be believed... This week's evidence that stupidity is most certainly alive and well in the network hardware business points to the geniuses at D-Link and their publishing of the company's code-signing key - publicly.
"The key expired earlier this month, but Klijnsma said that any software that was signed before the expiration date will continue to be accepted as a legitimate D-Link release. He said the key is accepted by Microsoft Windows code-signing requirements and appears to be accepted by Apple's OS X as well. The security analyst said he has reported the leaked key to officials at Symantec, the security firm that oversees the certificate authority that validated the D-Link key, in hopes of getting it revoked. It's unclear if or when that revocation may happen." - via Ars Technica's Dan Goodin
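For readers wanting to poke at the mechanics, here is a rough shell sketch (assuming openssl is on the PATH) of inspecting a certificate's validity window; a throwaway self-signed cert stands in for the leaked key, and the file names and one-day lifetime are arbitrary choices of mine.

```shell
# Generate a throwaway self-signed cert as a stand-in for a signing cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-signing" \
  -days 1 -keyout /tmp/demo-signing.key -out /tmp/demo-signing.pem 2>/dev/null

# Show the notBefore/notAfter validity window.
openssl x509 -in /tmp/demo-signing.pem -noout -dates

# -checkend N exits 0 if the cert will still be valid N seconds from now.
if openssl x509 -in /tmp/demo-signing.pem -noout -checkend 2592000; then
  echo "valid for at least 30 more days"
else
  echo "expires within 30 days"
fi
```

Note that `-checkend` only answers "will this cert still be valid N seconds from now?" - which is exactly why, as the quote above explains, expiry alone is not enough: software signed before the notAfter date can continue to validate, and revocation by the issuing certificate authority is the actual remedy.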