Verification and testing are an integral part of the software development life cycle (SDLC): this is typically the phase in which software products are evaluated to determine whether they run as intended and meet user and customer needs.
In a traditional SDLC, security checks and assessments are typically consigned to the verification and testing phase. A Secure SDLC (SSDLC) differs sharply: security is implemented at every step of the development life cycle, so it is built into the core foundation of the software, and the verification phase becomes an extension of the security work performed earlier. It is also important to define what verification testing means, because the software testing industry draws a sharp distinction between “verification” and “validation.” As Software Testing Fundamentals notes, verification testing determines whether a product still in development works correctly and meets its original requirements, whereas validation is performed post-production to determine whether the application meets the end user’s needs, that is, whether the correct software was built for the original objective.
That said, properly performing security procedures during the previous phases of the SSDLC - including security peer reviews, threat modeling, risk analyses, secure development practices such as secure coding, and static analysis during the development phase - should result in many security vulnerabilities being eliminated before even reaching this phase.
However, many vulnerabilities can still slip through earlier testing, so verification remains a significant phase of the SDLC. Numerous, continuous security assessments are needed to identify the vulnerabilities that remain in the application’s code. Verification testing encompasses many different procedures and protocols. Traditional testing during this phase often entails GUI acceptance testing using Selenium, performance and load testing, testing API calls, and API functionality and integration tests. High-level smoke testing is often performed to ensure that the system is live and responding correctly. Smoke testing verifies that the application’s critical functionality is in place and working, and is typically used as a basic, initial test to establish whether further testing is feasible. These tests are often run against each successive build to determine whether key functions, modules, and components are working correctly.
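A smoke test of this kind can be sketched in a few lines. The following is a minimal, self-contained illustration, not a production harness: the stub HTTP server stands in for a deployed application, and the list of critical paths is assumed for the example.

```python
import http.server
import threading
import urllib.request

# Stand-in application for the example: answers 200 OK on every path.
class StubAppHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

def smoke_test(base_url, critical_paths):
    """Return True only if every critical endpoint responds with HTTP 200."""
    for path in critical_paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                if resp.status != 200:
                    return False
        except OSError:
            return False
    return True

# Start the stub app on an ephemeral port and probe it.
server = http.server.HTTPServer(("127.0.0.1", 0), StubAppHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

result = smoke_test(base, ["/health", "/login", "/api/v1/status"])
server.shutdown()
print(result)  # True
```

In a real pipeline the same check would run against the freshly deployed build, and a False result would abort further (and more expensive) testing.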
While security assessments, security analyses, and peer code reviews are performed during earlier phases (such as development), vulnerabilities are often missed there. The verification testing phase applies a more thorough, robust testing procedure, often at run time, in order to monitor the application for security holes more comprehensively.
This includes automated testing of security functionality: authentication, authorization, session management, encryption, access control, validation, and encoding. For example, a user with a standard (non-admin) account should only have access to user-level pages and APIs. Automated testing should verify that such a user has neither administrative access nor access to management sections of the system, and that any attempt to reach those areas results in an “unauthorized” error. Automated testing is pivotal in ensuring that the application’s access controls route each user only to the endpoints they are permitted to use. Automated non-functional testing is also used during this phase to verify secure server configuration and that TLS is configured to properly encrypt data in flight.
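The authorization example above can be expressed directly as an automated test. The `authorize` function and the set of admin paths below are hypothetical stand-ins for a real application's access-control layer; the point is the shape of the assertion, which would normally run on every build.

```python
# Hypothetical admin-only endpoints for this sketch.
ADMIN_PATHS = {"/admin", "/admin/users", "/api/admin/config"}

def authorize(role, path):
    """Return an HTTP-style status: 200 if allowed, 403 ('unauthorized') otherwise."""
    if path in ADMIN_PATHS and role != "admin":
        return 403
    return 200

def test_standard_user_cannot_reach_admin():
    # Every admin endpoint must be rejected for a non-admin session.
    for path in ADMIN_PATHS:
        assert authorize("user", path) == 403
    # User-level pages remain reachable for the same account.
    assert authorize("user", "/dashboard") == 200

test_standard_user_cannot_reach_admin()
print("authorization checks passed")
```

Analogous automated checks can cover the non-functional side, for example verifying that the server rejects handshakes from deprecated TLS versions.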
Lightweight, dynamic scanning and testing tools are typically introduced in the verification phase of the Secure SDLC to scan for known vulnerabilities. While static code analysis scans source code for known security holes, dynamic tools help determine whether the application’s implementation is secure. To be efficient and productive, such scanners need to run quickly, require little maintenance or overhead, and produce few false positives. The last point is significant: a scanner loses its value when engineers receive reports of multiple security holes that do not exist. Many powerful, lightweight, and efficient scanning tools exist, such as ZAP and Arachni, which can operate in conjunction with continuous integration testing tools such as Gauntlt, BDD Security, and Selenium to automate a dynamic scanner in headless mode.
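One common way to wire such a scanner into CI is to parse its report and fail the build only on high-severity findings that have not already been triaged as false positives. The sketch below assumes a generic JSON report shape (`alerts` with `id` and `risk` fields); real tools such as ZAP use their own field names, so this is illustrative rather than a drop-in integration.

```python
import json

def gate_on_findings(report_json, suppressed_ids):
    """Return high-severity findings that are not known false positives.

    `report_json` is an assumed scanner report of the shape
    {"alerts": [{"id": ..., "risk": "High"|"Medium"|"Low"}, ...]}.
    """
    report = json.loads(report_json)
    return [a for a in report["alerts"]
            if a["risk"] == "High" and a["id"] not in suppressed_ids]

# Example report with one real finding and one triaged false positive.
sample = json.dumps({"alerts": [
    {"id": "40012", "risk": "High"},   # genuine high-severity alert
    {"id": "10021", "risk": "Low"},
    {"id": "90001", "risk": "High"},   # previously triaged as a false positive
]})

blocking = gate_on_findings(sample, suppressed_ids={"90001"})
print(len(blocking))  # 1 -- a CI wrapper would exit nonzero here
```

Keeping the suppression list in version control makes the false-positive triage auditable, which is exactly what preserves engineers' trust in the scanner.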
There are many valuable testing methods associated with automated scanning and testing. Blackbox testing and fuzzing are two such tests that are often conducted during the verification phase. Because vulnerabilities are exploited via user inputs - which manipulate a system to react in ways that a developer didn’t intend or expect - it is important to test a variety of inputs against a system to determine how the application can potentially be exploited.
It is also important to note the difference between verification security testing and QA testing: while QA testing uses “positive” inputs to confirm that the expected output results, fuzzing and similar tests use malicious inputs to establish whether an attacker’s input could produce malicious results. Fuzzing works by throwing random, trial-and-error inputs at a system; blackbox testing probes a system without knowledge of its source code or inner workings, closely simulating a real attacker who attempts to compromise an application without intimate knowledge of its architecture or code. While fuzzing and blackbox testing are critical in verification testing, dynamic automated scans are equally important for finding as many vulnerabilities as possible. At the same time, the nature of dynamic scans imposes some important protocols. Chiefly, a full dynamic scan of a large system can take hours or even days, so nightly tests and continuous integration (CI) tests should be used to optimize the workflows of the CI pipeline. This allows detailed testing of software builds, swift identification and correction of the bugs found, and rapid feedback on new builds destined for the code repositories.
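The core of a fuzzer is small enough to show in full. The toy parser below is a deliberately buggy example written for this sketch; the fuzzer treats its documented failure (`ValueError` on empty input) as acceptable and records any other exception as a finding.

```python
import random

def parse_record(data: bytes):
    """Toy length-prefixed parser: first byte declares the payload length."""
    if not data:
        raise ValueError("empty input")  # documented, expected failure mode
    length = data[0]
    # Bug: trusts the declared length, so short inputs raise IndexError.
    return bytes(data[1 + i] for i in range(length))

def fuzz(target, iterations=2000, seed=1234):
    """Throw random byte strings at `target`, collecting unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 16)))
        try:
            target(data)
        except ValueError:
            pass  # expected rejection of bad input
        except Exception as exc:  # anything else is a potential vulnerability
            crashes.append((data, type(exc).__name__))
    return crashes

findings = fuzz(parse_record)
print(len(findings) > 0)  # random inputs quickly expose the IndexError bug
```

Production fuzzers (coverage-guided tools, protocol-aware generators) are far more sophisticated, but the feedback principle is the same: unexpected crashes on attacker-controlled input are security findings.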
As mentioned above, certain tests, such as long-running dynamic scans and fuzzing, can take a long time to complete and should therefore be carried out during nightly builds. Depending on how long they take, it may be more productive to run some of them periodically outside the build pipeline altogether. With an efficient CI pipeline in place, results can be fed back to engineers through ticketing systems (e.g. JIRA or Git) as tasks and builds finish, eventually forming a complete feedback loop for all issues identified in the software in production. The next step in the SSDLC is the release phase, which often includes maintenance and support of the application.
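Closing that feedback loop usually requires deduplication, so that a nightly scan does not file a fresh ticket for a finding that is already tracked. The sketch below is illustrative: `file_ticket` is a hypothetical stand-in for a real issue-tracker API call, and the (rule, location) key is one reasonable choice of identity for a finding.

```python
def dedupe_and_file(findings, already_tracked, file_ticket):
    """File a ticket for each finding not already tracked; return what was filed."""
    filed = []
    for f in findings:
        key = (f["rule"], f["location"])  # assumed identity of a finding
        if key not in already_tracked:
            file_ticket(f)           # hypothetical issue-tracker call
            already_tracked.add(key)
            filed.append(f)
    return filed

# One finding is already tracked from a previous night's run.
tracked = {("sql-injection", "app/search.py:42")}
nightly = [
    {"rule": "sql-injection", "location": "app/search.py:42"},  # known issue
    {"rule": "xss", "location": "app/profile.py:10"},           # new finding
]

created = []
dedupe_and_file(nightly, tracked, created.append)
print(len(created))  # 1 -- only the new XSS finding becomes a ticket
```

Persisting the tracked set between runs is what turns nightly scan output into a stable backlog rather than a flood of duplicate tickets.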