SECURE SOFTWARE DEVELOPMENT LIFE CYCLE
| VERIFICATION AND TESTING PHASE (SDLC)
VERIFICATION AND TESTING PHASE
OF THE SECURE SDLC
In a traditional SDLC, security checks and assessments are typically confined to the verification and testing phase. A Secure SDLC (SSDLC) differs sharply: security is applied at every step of the development life cycle, so that it is built into the core foundation of the software and the verification phase becomes an extension of the security work performed earlier. It is also important to define what verification testing means, because "verification" and "validation" mean different things in the software testing industry. As Software Testing Fundamentals notes, verification testing determines whether a product still in development works correctly and meets its original requirements, while validation is performed post-production to determine whether the application meets the end user's needs, in other words, whether it is the correct software for fulfilling the original objective.
That said, properly performing security procedures during the earlier phases of the SSDLC, including security peer reviews, threat modeling, risk analyses, secure coding practices, and static analysis during development, should eliminate many security vulnerabilities before this phase is ever reached.
However, many vulnerabilities can still slip through earlier testing, so verification testing remains a significant phase of the SSDLC. Numerous, continuous security assessments are necessary to fully identify any vulnerabilities that remain in the application's code. Verification testing encompasses many different procedures and protocols. Traditional testing during this phase often includes GUI acceptance testing with Selenium, performance and load testing, and API functionality and integration tests. High-level smoke testing is often performed to confirm that the system is live and responding correctly: it verifies that the application's critical functionality is in place and working, and typically serves as a basic initial gate that establishes whether further testing is worthwhile. These tests are often run against each successive build to determine whether key functions, modules, and components work correctly.
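A smoke-test gate of this kind can be sketched as a small harness that runs a set of named checks and fails the build if any critical check fails. This is a minimal illustration, not any particular tool; the check names and what they would verify (e.g. a health endpoint responding) are hypothetical.

```python
"""Minimal smoke-test harness sketch (check names and targets are hypothetical)."""
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CheckResult:
    name: str
    passed: bool


def run_smoke_tests(checks: Dict[str, Callable[[], bool]]) -> List[CheckResult]:
    """Run each named check; any exception counts as a failure of that check."""
    results = []
    for name, check in checks.items():
        try:
            results.append(CheckResult(name, bool(check())))
        except Exception:
            results.append(CheckResult(name, False))
    return results


if __name__ == "__main__":
    # Hypothetical checks; in a real build these would hit live endpoints.
    checks = {
        "app_responds": lambda: True,      # e.g. GET /health returns 200
        "login_page_loads": lambda: True,  # e.g. GET /login returns 200
    }
    build_is_testable = all(r.passed for r in run_smoke_tests(checks))
    print(build_is_testable)
```

In a real pipeline, a failed smoke run would abort the build before the slower acceptance and load tests are scheduled.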
RUN TIME VERIFICATION CHECKS
TO MONITOR APPLICATIONS FOR SECURITY FLAWS
This includes automated testing of security functionality: authentication, authorization, session management, encryption, access control, validation, and encoding. For example, a user with a standard (non-admin) account should only have access to user-level pages and APIs; automated tests should confirm that such a user has no administrative access to management sections of the system, and that any attempt to reach them results in an "unauthorized" error. Automated testing is pivotal for ensuring that users can reach exactly the endpoints their access level permits. Automated non-functional testing is also used during this phase to verify secure server configuration and that TLS is correctly configured to encrypt data in transit.
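An authorization check of this kind can be expressed as a small table-driven test. The route table, role names, and status codes below are illustrative assumptions, not any specific framework's API; the point is that the assertions encode the policy "a standard user must be rejected from admin endpoints" so a regression is caught automatically.

```python
"""Sketch of an automated authorization test (route table and roles are hypothetical)."""

# Hypothetical mapping of URL prefixes to the role required to access them.
ROUTE_ROLES = {
    "/admin": "admin",
    "/api/manage": "admin",
    "/api/user": "user",
    "/home": "user",
}

ROLE_RANK = {"user": 1, "admin": 2}


def authorize(role: str, path: str) -> int:
    """Return an HTTP-style status: 200 if allowed, 403 if forbidden, 404 if unknown."""
    for prefix, required in ROUTE_ROLES.items():
        if path.startswith(prefix):
            return 200 if ROLE_RANK.get(role, 0) >= ROLE_RANK[required] else 403
    return 404


# Automated policy checks: a standard user must never reach admin endpoints.
assert authorize("user", "/home") == 200
assert authorize("user", "/admin") == 403
assert authorize("user", "/api/manage/settings") == 403
assert authorize("admin", "/api/manage") == 200
print("authorization policy checks passed")
```

In practice the same assertions would be issued as real HTTP requests with a standard user's session token, checking for a 403 response.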
DYNAMIC ANALYSIS OF THE APPLICATION
BEFORE RELEASING FOR QA TESTING
AUTOMATED DYNAMIC SCANS IN THE NIGHTLY BUILDS (CI PIPELINE)
It is also important to distinguish verification security testing from QA testing: QA testing feeds "positive" inputs to confirm that the expected output results, whereas fuzzing and similar tests feed malicious inputs to establish whether an attacker's inputs could have malicious results. Fuzzing works by throwing random, trial-and-error inputs at a system; black-box testing probes a system without knowledge of its source code or inner workings, closely simulating a real attacker who tries to compromise an application without intimate knowledge of its architecture or code. Fuzzing and black-box testing are critical in verification testing, and dynamic automated scans are equally important for finding as many vulnerabilities as possible. Dynamic scans, however, demand some discipline: a full scan can take hours or even days on a large system, so nightly tests and continuous integration (CI) tests should be used to optimize workflows in the CI pipeline. This allows detailed testing of software builds, swift identification and correction of bugs, and rapid feedback on new builds destined for the code repositories.
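The random, trial-and-error nature of fuzzing can be sketched in a few lines: generate random payloads, feed them to an input handler, and flag any exception that is not the handler's controlled rejection. The toy `parse_age` target below is an assumption for illustration; real fuzzers (and real targets) are far more sophisticated, but the success criterion is the same: unexpected crashes indicate bugs to triage.

```python
"""Minimal random-input fuzzing sketch against a toy input handler (illustrative only)."""
import random
import string


def parse_age(value: str) -> int:
    """Toy target under test: must reject bad input with ValueError, never crash otherwise."""
    if not value.isdigit():
        raise ValueError("age must be digits")
    age = int(value)
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age


def fuzz(target, trials: int = 1000, seed: int = 0) -> int:
    """Throw random printable strings at `target`; count unexpected exception types."""
    rng = random.Random(seed)
    unexpected = 0
    for _ in range(trials):
        payload = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 20))
        )
        try:
            target(payload)
        except ValueError:
            pass  # controlled rejection: the expected behavior for bad input
        except Exception:
            unexpected += 1  # potential crash or vulnerability to triage
    return unexpected


print("unexpected failures:", fuzz(parse_age))
```

A nonzero count would mean some input made the handler fail in an uncontrolled way, exactly the class of defect positive-input QA testing tends to miss.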
As mentioned above, certain tests, such as long-running dynamic scans and fuzzing, can take a long time to complete and should therefore run during nightly builds; depending on their duration, it may be more productive to run them periodically outside the build pipeline altogether. With an efficient CI pipeline in place, results can be fed back to engineers through ticketing systems (e.g., JIRA or Git issues) as tasks and builds finish, eventually closing the feedback loop for every issue identified in the software in production. The next phase of the SSDLC is release, which often includes maintenance and support of the application.
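The feedback loop from nightly scans to tickets can be sketched as a small transformation step: deduplicate the scan findings and shape each one as a tracker-ready payload. No real JIRA or Git API is invoked here; the finding fields and ticket shape are illustrative assumptions standing in for whatever the scanner and tracker actually expect.

```python
"""Sketch: turning dynamic-scan findings into tracker-ready ticket payloads.
Field names ("rule", "endpoint", "severity") are hypothetical, not a real scanner schema."""
from typing import Dict, List


def findings_to_tickets(findings: List[Dict]) -> List[Dict]:
    """Deduplicate findings by (rule, endpoint) and shape them as ticket payloads."""
    seen = set()
    tickets = []
    for f in findings:
        key = (f["rule"], f["endpoint"])
        if key in seen:
            continue  # same flaw reported twice in one scan: file one ticket only
        seen.add(key)
        tickets.append({
            "title": f"[security] {f['rule']} at {f['endpoint']}",
            "severity": f.get("severity", "medium"),
            "labels": ["dynamic-scan", "nightly-build"],
        })
    return tickets


findings = [
    {"rule": "reflected-xss", "endpoint": "/search", "severity": "high"},
    {"rule": "reflected-xss", "endpoint": "/search", "severity": "high"},  # duplicate
    {"rule": "weak-tls-cipher", "endpoint": "/", "severity": "medium"},
]
print(len(findings_to_tickets(findings)))  # prints 2
```

A nightly job would run this after the scan completes and post each payload to the team's tracker, so every identified issue enters the same triage queue as ordinary bugs.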