Wednesday 2 July 2008

Extreme Programming Document - Terms

Scope and Objective
The purpose of this document is to provide a development methodology based on Extreme Programming (XP).

Prerequisites
Developers should be familiar with software development lifecycles and methodologies, Extreme Programming in particular. Customers should be familiar enough with the functional domain to be able to clearly describe the behavior of the system being developed.

Definitions
Class Diagram A graphical representation of the structure of a system or subsystem (as described in the UML).
Collaboration Diagram A graphical representation of the relationships between system components that interact to perform a task (as described in the UML). A system component may participate in more than one collaboration.
Customers Users or Representatives (such as product managers or business analysts).
Deployment Installation of software into a particular environment (such as build, test, or production).
Entity-Relationship Diagram A graphical representation of a database schema.
Ideal Hours An estimate of development effort under perfect conditions.
Iteration A time period of one to four weeks containing a subset of the stories in the release.
Metaphor A short description of the vision of the product and each release, used to develop a shared understanding among the members of the Project Team.
Pattern A reusable design element. In the UML, a pattern is represented as a parameterized collaboration.
Project Team Developers and Customers; may also include QA personnel.
Promotion A deployment from one environment to another (such as from build to test).
Spike A research-oriented story estimated on the basis of time needed to find a solution. Additional time may be needed for implementation.
Sponsor The person providing resources for the project.
Story A functional story describes the user experience. A technical story may be a refactoring project (to facilitate new stories or simplify existing code), a prerequisite to development (setting up an environment or installing a database), or an incidental task (writing an interface to a third-party application or defining an API).
UML Unified Modeling Language - a standard for visualizing, specifying, constructing, and documenting software systems.
Use Case A requirements analysis technique that describes a user’s interactions with the system and the application’s responses.
Velocity A measure of the speed of the development team, relative to the estimates in ideal hours. A velocity of 2.0 means that development effort was twice as much as expected (the target velocity is 1.0, which indicates unbiased estimates).
Δ Greek symbol Delta, used to indicate a difference from the traditional XP methodology (per sentence), for purposes of ISO 9000 compliance, CMM implementation, and/or project visibility.
Requirements
Requirements Analysis
Requirements are defined in terms of functional stories. Customers will define the interaction between the user and the system for each functional story. Each story will be identified with a name and a number. Δ Stories will be documented in the form of use cases (see Appendix A) and maintained in the business unit’s software development life cycle management system.

Release Planning begins when Customers decide that enough stories have been defined for a release. Additional use cases may be created during release planning, if necessary.

Requirements Review
The Project Team reviews the use cases as a group during Release Planning and makes amendments as necessary. The development team and Customers may also meet at any time during design and development if they need clarification on the appearance or behavior of the system.

Requirements Sign-Off
Δ Once assigned to a release, use cases are signed off for understanding by the Customer and assigned developers using the business unit’s software development life cycle management system.


Requirements Changes
Any project team member or other person may request a change to the stories assigned to a release. The results of a spike may also require changes to the original story or refactoring of other parts of the system.

Δ Proposed changes (including adding stories to or removing stories from a release) will be entered into the business unit’s software development life cycle management system. Changes must be approved by the Customer, assigned developers, and the QA lead for the project; each change must be reviewed and a decision made within two business days (unless there are extenuating circumstances, such as illness or vacation, which must be noted on the request). The use cases are modified as changes are approved.

Release Planning
All members of the Project Team are involved in Release Planning. At the start of Release Planning, a metaphor for the release (or product, for a new product) is created by the Project Team to articulate the theme for development of the system.

Prioritization
In addition to the functional stories defined by the Customers, additional technical stories may be necessary, which are described by the development team. If necessary, stories may be split or merged to organize them in a meaningful way and/or decompose large projects into manageable parts. Customers will prioritize each story.

Estimation
Tasks needed to implement each story are defined, and estimates in ideal hours are developed for each task, which are aggregated at the story level. If a plausible estimate cannot be developed for a story (for example, if too much about the functionality is unknown or a prototype is required for technical feasibility), a spike will be assigned.

Δ For new products, estimates will be developed for all stories needed to create a salable product, even if they may be delivered in several interim releases, in order to perform a value analysis. Each subsequent release will be planned independently.

Customers may reprioritize stories based on the estimates provided. Developers may revise their estimates based on changes made to the stories by the Customers. If the team’s velocity is known, all estimates will be adjusted in order to obtain the expected number of effort hours.
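As an illustration of the adjustment described above (not part of the methodology itself; the function name is hypothetical), converting an ideal-hours estimate into expected effort hours amounts to multiplying by the team’s measured velocity:

```python
def adjusted_estimate(ideal_hours, velocity):
    """Scale an ideal-hours estimate by the team's measured velocity.

    Per the Definitions section, a velocity of 2.0 means past work
    took twice the estimated effort, so future estimates are doubled
    to obtain expected effort hours; a velocity of 1.0 (the target,
    indicating unbiased estimates) leaves them unchanged.
    """
    return ideal_hours * velocity

# A story estimated at 8 ideal hours, on a team with velocity 1.5,
# is expected to take 12 effort hours.
print(adjusted_estimate(8, 1.5))  # 12.0
```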


Story Assignment
Customers assign stories to the release based on the estimates and the constraints of the release (for example, if the release schedule is fixed, this will be the number of developer hours available in the release timeframe). XP releases are typically three months or less in duration. Stories will be assigned to iterations within the release and the number and duration of iterations will be determined.

Δ Stories not assigned to the release are inventoried using the business unit’s software issue tracking system. This inventory will also be used to track product enhancement requests.

Project Plan Creation
Stories, tasks, and estimates are logged in the business unit’s software project management system. Team members sign up for tasks and leveling is performed to ensure that all tasks are assigned and that all team members have a reasonable workload.

In order to spread knowledge of system components from the original authors to the rest of the development team, coding assignments will pair a developer who is familiar with the area being modified with someone who is not, where such a situation exists (and there is more than one developer on the project team).

Project Plan Review and Sign-Off
Δ The baseline plan is reviewed and a sign-off is performed by all project team members to indicate understanding, using the business unit’s software development life cycle management system.

Project Plan Amendments
Using the business unit’s software project management system, the plan is adjusted when appropriate to reflect the current status. Reports will be used throughout the release to determine the status of the release in terms of progress against the schedule. At the end of the release the actual numbers will be used to determine the team’s velocity.
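The end-of-release velocity calculation follows from the definition given earlier: velocity is the ratio of actual effort to the ideal-hour estimates. A minimal sketch (the function name is hypothetical, used only for illustration):

```python
def team_velocity(actual_hours, estimated_ideal_hours):
    """Compute velocity as actual effort over ideal-hour estimates.

    A result of 2.0 means work took twice as long as estimated;
    the target of 1.0 indicates unbiased estimates.
    """
    return actual_hours / estimated_ideal_hours

# A release estimated at 400 ideal hours that actually consumed
# 500 effort hours yields a velocity of 1.25.
print(team_velocity(500, 400))  # 1.25
```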



Design
Design Analysis and Documentation
Design is performed concurrently with development. Δ The high-level architecture for each subsystem will be documented in class and collaboration diagrams, and an entity-relationship diagram will be created for the database schema; these diagrams will be updated as the design changes.

Class diagrams will reflect the static structure of the system, including inheritance and ownership. Collaboration diagrams will be used to document the interactions among system components. Patterns will be documented for collaborations that are used throughout the system.

Other design documentation may include video recordings of design sessions and images captured from whiteboards. Δ Electronic design artifacts will be stored in the business unit’s software development life cycle management system, and physical design artifacts (such as videotapes) will be retained by the project manager.

Design Review
Δ Modifications to the design will be reviewed at the end of each iteration and results will be recorded in the business unit’s software development life cycle management system.

Development
Coding
All code will follow the business unit’s coding standard(s) for the programming language(s) used for the project. Since system documentation is at a high level, header and in-line comments must fully document the intent and purpose of the code.

Where the programming language supports the generation of technical documentation from source comments (such as javadoc), such features will be used.
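The document does not prescribe a language, but the mechanism can be sketched in Python, whose docstrings play the role that javadoc comments play in Java. The `promote_build` function below is purely hypothetical, chosen only to show documentation generated from source comments:

```python
def promote_build(build_label, environment):
    """Copy the build identified by build_label to environment.

    This docstring is the Python analogue of a javadoc comment:
    tools such as pydoc generate reference documentation directly
    from it, so intent is documented where the code is written.
    """
    return f"promoting {build_label} to {environment}"

# The generated documentation is also available at runtime:
print(promote_build.__doc__)
```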

Code Review
Code reviews are performed continually through the use of pair programming. Since pair development functions as the code review, all work is required to be done in pairs (or teams, which may be necessary when coaching new project members). Code that is developed individually will be reviewed according to the process defined in the business unit’s Software Development Methodology.

Project Team members may also meet to review code at any time for educational purposes.

Unit Testing
Unit Test Creation
Unit tests are written throughout the development cycle. The initial set of unit tests is based on the developers’ understanding of the requirements of the story, and is modified as changes are made and problems are found.

Whenever possible, unit tests are to be automated for reusability. Tests that require a system failure that is not reproducible, such as the loss of a network connection, may need to be performed manually. However, manual tests are to be written only as a last resort. In cases where it is not possible, feasible, or economical to create automated unit tests but automated system tests can be developed (for example, when verifying conformance to GUI standards), the system tests will be created during development and will also function as unit tests.

Unit tests are to be thorough. The use of code coverage tools may be necessary to detect dead code blocks and determine that all possible conditions are tested.

Note: unit tests are not necessary for types of errors that will be caught by the compiler being used for the project.
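As a sketch of what an automated, reusable unit test suite looks like, the example below uses Python’s unittest module (the language and the `verify_user` function are illustrative assumptions; the behavior tested is the hypothetical login story from Appendix A). Note how both the success and failure conditions are covered so the suite can be rerun after every build:

```python
import unittest

def verify_user(user_id, password, accounts):
    """Return True if the credentials match a known account."""
    return accounts.get(user_id) == password

class VerifyUserTest(unittest.TestCase):
    """Automated unit tests for the login story's verification step."""

    accounts = {"alice": "s3cret"}

    def test_valid_credentials_accepted(self):
        self.assertTrue(verify_user("alice", "s3cret", self.accounts))

    def test_wrong_password_rejected(self):
        self.assertFalse(verify_user("alice", "guess", self.accounts))

    def test_unknown_user_rejected(self):
        self.assertFalse(verify_user("bob", "s3cret", self.accounts))

if __name__ == "__main__":
    unittest.main()
```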

Unit Test Review
Since code will be written to implement unit tests, all tests must also be written in pairs. Individually written unit tests will be reviewed according to the process defined in the business unit’s Software Development Methodology.

Unit Test Execution
All unit tests must execute successfully before integration. Δ Logs containing the results of automated unit tests will be retained on the server executing the tests in a directory specified by the tool being used. Δ Manual unit test results will be stored in the business unit’s software development life cycle management system.

Integration
All application artifacts will be checked out for modification, with strict locking (only one person at a time will be able to modify a particular file). Each developer will have a sandbox to contain a copy of the development environment, including their checked-out files. Δ A comment indicating the story name and number will be entered for traceability when artifacts are checked in.

Upon check-in of source code, a build will be performed (either automatically or manually, depending on the availability of tool support for the particular development environment). The complete set of all automated unit tests will be run after each successful build.

If the build is unsuccessful or there are errors reported by the unit tests, the developers who performed the check-in will update the code and unit tests as necessary until all problems are resolved.

Deployment
Promotions are performed at certain points in the release cycle in order to deploy the system to various environments. To prepare for a promotion, the developer will perform a build of the system, run the unit tests (as described under Integration), and checkpoint and label the source code and other configuration items (such as database scripts needed for the release) using the business unit’s configuration management system. Δ Each promotion will be logged in the business unit’s software development life cycle management system.

During a promotion the executable code and any necessary files (such as data files, help files, and configuration files) are copied from the source to the destination environment. Any necessary changes are also made to the database structure of the destination environment.

At the end of each iteration, the system will be promoted from the build to the test environment. Customers may also request a promotion to test within an iteration.

At the end of the release the system will be promoted from the test to the staging environment. The staging environment is used for product demos and for preparation of the packaging materials for pilot and production installation.

Packaging will be treated as a separate technical story for each release, undergoing all of the development and testing activities in this process. The packaging story must cover new installations and upgrades if applicable.

System Testing
Acceptance Testing
At the end of the last two iterations in the release, Customers will have the opportunity to validate that the system implements the stories as expected.

Customers write acceptance tests for each functional story while design and development are in progress, using the stories to develop test cases. The test script for a story will include verification of the system’s response to each user action. Δ Test scripts will be stored in the business unit’s software development life cycle management system.

After a promotion to the test environment, customers will execute the test cases and will indicate the results in the test scripts. Δ The completed test scripts will be stored in the business unit’s software development life cycle management system.

Regression Testing
During System Testing, Customers will also ensure that changes made for the current release have not caused adverse side effects to existing functionality using test scripts from prior releases. Δ The completed test scripts will be stored in the business unit’s software development life cycle management system.

Δ If the project team has technical QA resources, the test scripts for the stories in each release will be automated after it is promoted to production. Δ The automated scripts will be executed against subsequent promotions in place of manual regression testing, and the logs of test results will be retained on the server executing the tests.

Δ Pilot Testing
If the Customers on the project team are representatives rather than actual users (or may not be representative of the entire user population), one or more pilot installations will be performed in order to validate the system under expected production conditions. Users involved in the pilot will perform acceptance and regression testing. Defects discovered during pilot testing will be reported to the Customers and tracked in the same way as problems found internally.

Defect Tracking
Defects discovered during and after System Testing will be logged in the business unit’s software issue tracking system and tracked until closure. If an error is discovered in a manual or automated test script, it will be logged as a defect and the modified test case(s) and any others that would be affected by the change will be re-executed.

Release Evaluation
Any problems discovered during System Testing must be resolved unless Customers agree to enter a project for the issue in the business unit’s software issue tracking system for resolution in a later release.

The Customers on the project team, QA Lead, and project Sponsor will determine when the application is ready to be released. Δ Approval for release will be performed via the business unit’s software development life cycle management system.

External customers and other stakeholders such as the Customer Support, Deployment, Implementations, and Training groups will be notified in advance of the expected release date. Each external customer will determine when the application is ready to be installed at their site.


Δ Lessons Learned
After the release is complete, the Project Team will meet to review the project metrics and any other relevant observations. The results of the meeting, including suggestions for improvement, will be tracked and monitored using the business unit’s continual improvement process.

Records

Description - Location
Project Plan - Business unit’s software project management system
Project Plan Approvals - Business unit’s software development life cycle management system
Project Requests - Business unit’s software issue tracking system
Use Cases - Business unit’s software development life cycle management system
Use Case Approvals (new and modifications) - Business unit’s software development life cycle management system
Class and collaboration diagrams - Business unit’s software development life cycle management system
Design Approvals (new and modifications) - Business unit’s software development life cycle management system
Unit Test Results - Business unit’s software development life cycle management system (for manual tests); development server (for automated tests)
Acceptance Test Scripts - Business unit’s software development life cycle management system
Acceptance Test Results - Business unit’s software development life cycle management system
Regression Test Scripts - Business unit’s software development life cycle management system (for manual tests); test server (for automated tests)
Regression Test Results - Business unit’s software development life cycle management system (for manual tests); test server (for automated tests)
Pilot Test Results - Business unit’s software development life cycle management system
System Test Defects - Business unit’s software issue tracking system
Release Approval - Business unit’s software development life cycle management system


Revision History

Revision Date - Section Revised - Description of Revision
April 28, 2003 - All - First Published Revision










Requests for revisions to this document should be submitted to the CCS Product Development Manager, who is the owner of this document.

Appendix A: Documenting Requirements with Use Cases

Use cases are a technique for documenting the interaction between the user and the system. As such, they are useful for recording stories in a structured form (similar to a story card). Use cases may be presented in descriptive, tabular, or diagram format. The format to be used will be determined for each project.

Example 1 – Descriptive Format:

The descriptive format is a narrative of the behavior of the system in response to user actions. A descriptive use case should contain the same information as one in tabular format.

After the user launches the application, the logon screen is displayed. When the user types their id, it is displayed in the field. When the user types their password, it is masked with asterisks. When the user presses the login button, the system verifies the user’s credentials. If successful, the application’s splash screen is displayed and the user is taken to the system’s main screen. If the user id and/or password is incorrect, an error message is displayed and the application shuts down when the user presses the OK button to acknowledge the message.

Example 2 – Tabular Format:

Use cases in tabular format contain the following elements:
• A brief description of the purpose of the use case.
• The normal and alternative flows of control (stimulus/response sequences) between the system and the user, including any reusable sub-flows for the use case.
• Preconditions and post-conditions.


Story #: 123 - Story Name: Login to application
Description: Authenticate the user’s credentials and initialize the application
Preconditions: 1. The application has been launched.
Base Flow
User: Enter user id -> System: Display user id in field as typed
User: Enter password -> System: Asterisks display in password field as typed
User: Press the login button -> System: Verify user; display splash screen (if successful), or go to [Alternate Flow 1] (if failure)
Alternate Flow(s)
[Alternate Flow 1] -> System: Display error message
User: Press OK -> System: Perform shutdown
Post-conditions: 1. The main system screen is displayed (if successful)
2. The application exits (if failure)
