- Time: The actual time required for a product to be deliverable
- Cost: The amount of money required to complete the project
- Scope: The functional elements that make up the project deliverables

They are dependent on each other.
Example: if time reduces => scope reduces or cost increases.
- Requirement gathering
- Analysis
- Design architecture
- Implementation
- Testing
- Requirement gathering
- For a good requirement, it needs to be:
- Consistent
- Modifiable
- Traceable
- Unambiguous
- Complete
- Verifiable (between both parties)
- Feasible (doable)
- Analysis
- Document all the requirements
- Design
- Structure/database/workflow/interaction/interface design.
- Implementation
- Start coding
- Testing.
If testing fails, the process has to start again from the beginning.
- Risk analysis
- Plan
- Evaluate
- Engineer

After step 4 (Engineer), loop back to step 1 (Risk analysis).
Requirements are replaced with use cases.
Sits between Agile and Waterfall: a step on from waterfall.
- Use case driven
- Architecture based on use cases
- Develop bit by bit rather than the whole project
- 4 Components:
- Worker
- What
- How
- When
- Phases:
- Inception - What to build?
- Elaboration - How to build?
- Construction - Build
- Transition - Test/validate
- Problems:
- Still document-based (though less than waterfall)
- Releases often (but not often enough)
- Improve use cases with user stories
- How to write user stories:
- As a [...], I want to [...], so I can [...]
- Incremental development
- Fast and flexible to change
- Continuous, frequent delivery
- Popular frameworks:
- Scrum
- Roles:
- Product Manager
- Scrum master
- Team member
- Ceremonies:
- Sprint planning
- Sprint review
- Sprint retrospective
- Daily scrum meeting
- Artefacts:
- Product backlog (wish list) -> Release backlog (selected for this release) -> Sprint backlog (selected for this sprint) -> Burndown chart (how much work remains)
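The burndown chart at the end of that artefact flow just tracks how much of the sprint backlog is left each day. A minimal sketch (class and method names are illustrative, not from any Scrum tool):

```java
import java.util.ArrayList;
import java.util.List;

public class SprintBurndown {
    // Remaining story points after each day, given points completed per day.
    static List<Integer> remainingPoints(int totalPoints, int[] completedPerDay) {
        List<Integer> remaining = new ArrayList<>();
        int left = totalPoints;
        for (int done : completedPerDay) {
            left = Math.max(0, left - done);
            remaining.add(left);
        }
        return remaining;
    }

    public static void main(String[] args) {
        // A 40-point sprint backlog burned down over a 5-day sprint.
        System.out.println(remainingPoints(40, new int[]{8, 10, 5, 9, 8}));
        // prints [32, 22, 17, 8, 0]
    }
}
```

Plotting these values against the days of the sprint gives the burndown chart.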
- XP
- Few documents
- Weekly releases
- Pair programming
- Only keep the code and the tests
Compare with RUP.
-
Use case name | |
---|---|
Version | |
Goal | |
Summary | |
Actors | |
Basic course of action | |
Alternative path | |
Post condition | |
Business rules | |
Notes | |
Author & Date | |
Centralised control:
- Everything passes back to one main class for processing.
Distributed control:
- Information is passed between classes for processing.
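A hypothetical sketch of the contrast (all class and method names are mine, not from the notes). Both compute the same total; the difference is where the processing lives:

```java
import java.util.Arrays;
import java.util.List;

public class ControlStyles {
    static class Order { double price; Order(double p) { price = p; } }

    // Centralised control: one main class pulls the data out of the other
    // objects and does all the processing itself.
    static double centralTotal(List<Order> orders, double taxRate) {
        double sum = 0;
        for (Order o : orders) sum += o.price; // the controller does the work
        return sum * (1 + taxRate);
    }

    // Distributed control: each object processes its own part and only the
    // results are passed between classes.
    static class TaxedOrder {
        private final Order order;
        TaxedOrder(Order order) { this.order = order; }
        double taxedPrice(double taxRate) { return order.price * (1 + taxRate); }
    }

    static double distributedTotal(List<TaxedOrder> orders, double taxRate) {
        double sum = 0;
        for (TaxedOrder o : orders) sum += o.taxedPrice(taxRate); // each object computes its share
        return sum;
    }

    public static void main(String[] args) {
        List<Order> orders = Arrays.asList(new Order(10), new Order(20));
        System.out.println(centralTotal(orders, 0.1)); // ~33.0 with either style
    }
}
```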
Format:
event[guard]/action
// or
event[guard1 && guard2]/action
Event also known as Trigger.
Everything is optional.
At any state, there cannot be two outgoing transitions triggered by the same event
=> guards have to make them mutually exclusive.
Example:
- Play[CD] vs Play[noCD]
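The Play[CD] vs Play[noCD] example can be sketched as code: one event, two mutually exclusive guards, each with its own action. Class and state names are illustrative:

```java
// Minimal sketch of event[guard]/action transitions for a CD player.
public class CdPlayer {
    enum State { STOPPED, PLAYING, ERROR }

    private State state = State.STOPPED;
    private final boolean cdInserted;

    CdPlayer(boolean cdInserted) { this.cdInserted = cdInserted; }

    State getState() { return state; }

    // One event ("Play") with two mutually exclusive guards, as required above.
    void play() {
        if (state == State.STOPPED && cdInserted) {         // Play[CD] / start playback
            state = State.PLAYING;
        } else if (state == State.STOPPED && !cdInserted) { // Play[noCD] / show error
            state = State.ERROR;
        }
    }
}
```

Because the guards cannot both be true, the same Play event always has exactly one transition to take.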
- Quality of the tests determines success
- Prevent faults/errors by using early life-cycle testing techniques
- A person must be responsible for improving the test process
- Requires trained/skilled people
- Maintain a positive attitude (don't get mad if they break your program)
- Error
- Fault - result from error
- Failure - result from fault
- Incident - symptom of failure
- Test case - to find failure
- Test - the action
Testing -> fault classification -> fault isolation -> fault resolution
V stands for verify (trick to remember)
Document verification | Acceptance test |
---|---|
Specification verification | System test |
Design verification | Integration test |
Coding verification | Unit test |
Test throughout the development effort
Development effort:
- Planning
- Configuration
- Staffing
- Test development
- Test execution
- Plan
- Develop test cases
- Run test cases
- Evaluate test results
- Test deliverables
- Report
- Statistics
- It’s too close to the shipping date; the product has to be released
- Lack of time or resources
How to determine when to stop?
- Based on the density of errors
- Based on the frequency of bugs found when re-testing
- Number of open bugs
white box | black box |
---|---|
Need to understand the code | Don't need to understand the code |
Tests the internal structure of the code | Tests the functionality of the system, e.g.: |
 | Usability test |
 | Performance test |
 | Stress test |
 | Configuration test |
 | Other non-functional tests |
- Initiation phase - Document verification
- Requirement phase - Review all the documents, begin designing acceptance tests
- Software architecture phase - Review the architecture design, start designing system tests
- Detailed design phase - Review the design documents, start designing integration tests
- Implementation phase - Code inspection, design/implement unit tests
- Integration and testing phase - Integrate the tested units
- Acceptance and transition phase - Do the system testing designed in phase 3; black box testing
- Acceptance testing and review
identifier | unique ID |
---|---|
test items | components / features being tested |
input specification | |
output specification | expected output |
environment needs | |
special procedure requirements | constraints or special needs |
inter-case dependencies | any dependencies (e.g. extra libraries) |
- Not enough time
- Not real bug
- Too risky to fix
- Not worth fixing
- The bug report is not good
- TO BE GOOD:
- Non-judgemental, non-personal
- Well described
- Follow up on bug reports
title | |
---|---|
Description | |
Severity/ Priority | |
Reproduction steps | |
Expected result | |
Actual result | |
- Open - Waiting to be fixed
- Resolved - Fixed, waiting to be tested
- Closed - Tested and approved
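The bug life cycle above can be sketched as an enum with allowed transitions. The "reopen" transition (Resolved back to Open when re-testing fails) is my assumption, not stated in the notes:

```java
public class BugLifecycle {
    enum Status { OPEN, RESOLVED, CLOSED }

    // A bug moves forward OPEN -> RESOLVED -> CLOSED, or (assumed here)
    // back to OPEN if the fix fails re-testing.
    static boolean canTransition(Status from, Status to) {
        switch (from) {
            case OPEN:     return to == Status.RESOLVED;
            case RESOLVED: return to == Status.CLOSED || to == Status.OPEN;
            default:       return false; // CLOSED is terminal
        }
    }

    public static void main(String[] args) {
        System.out.println(canTransition(Status.OPEN, Status.RESOLVED)); // prints true
        System.out.println(canTransition(Status.CLOSED, Status.OPEN));   // prints false
    }
}
```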
- Equivalence class testing:
- Divide test cases into different groups of test cases:
- < 18; 18 - 50; > 50
- Boundary value testing
- Test specific boundary values instead of a whole group
- Boundary values are where two groups meet
- For example:
- Testing for under 16, the boundary values could be
- Age = 15, 16
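The two techniques combine naturally: pick one representative per equivalence class, then add the boundary pairs where classes meet. A minimal sketch using the < 18 / 18 - 50 / > 50 classes from above (the classify method and its labels are illustrative):

```java
public class AgeGroups {
    // Classes under test: < 18, 18 - 50, > 50.
    static String classify(int age) {
        if (age < 18) return "under18";
        if (age <= 50) return "18to50";
        return "over50";
    }

    public static void main(String[] args) {
        // Equivalence classes: one representative from each group is enough.
        System.out.println(classify(10)); // prints under18
        System.out.println(classify(30)); // prints 18to50
        System.out.println(classify(60)); // prints over50
        // Boundary values: test where two classes meet (17/18 and 50/51).
        System.out.println(classify(17) + " " + classify(18)); // prints under18 18to50
        System.out.println(classify(50) + " " + classify(51)); // prints 18to50 over50
    }
}
```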
- Validation:
- Are we building the right product?
- Unit testing, integration testing, system testing, acceptance testing
- Verification:
- Are we building the product right?
Involves detecting new bugs or defects introduced by changes that attempt to fix existing bugs.
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CarTest {
    private Car car;

    @Before
    public void setUp() throws Exception {
        car = new Car(100, 100, 10);
    }
@After
public void tearDown() throws Exception {
}
@Test
public void testMove1() {
car.move();
assertEquals(120, car.getX());
}
@Test
public void testMove2() throws SpeedException {
car.move();
assertEquals(120, car.getSpeed());
}
@Test (expected = SpeedException.class)
public void testAccelerate() throws SpeedException {
car.accelerate();
car.accelerate();
car.accelerate();
}
}
@BeforeClass
is executed just once, when the class is first loaded.
Handy for connecting to the database.
@BeforeClass
public static void setupClass() throws Exception {
// Do stuff
}
@AfterClass
will be executed just once, after the class has finished. Suitable for cleaning up after the tests.
@AfterClass
public static void cleanUp() throws Exception {
// Do stuff
}
@Before
is executed ==before each test==. Suitable for setting up and initialising variables.
private int a;
private Program p;

@Before
public void setup() throws Exception {
    a = 10;
    p = new Program();
}
@After
is executed ==after each test==. Suitable for releasing resources such as files.
@After
public void free() throws Exception {
p = null; // a is a primitive, only object references need releasing
}
You can have as many @Before and @After methods as you want.
@Test
is used to test the class. Use assertTrue(condition)
or assertEquals(constant, variable)
to test.
@Test
public void testCase1() throws Exception {
    a += 10;
    assertEquals(20, a); // a was 10 from setup, now 20
assertTrue(a>=10);
}
@Test(timeout = 10)
public void testCase2() throws Exception {
assertTrue(a*999999 > 200000);
}
@Test(expected = ArithmeticException.class)
public void testCase3() throws Exception {
    a /= 0; // integer division by zero throws ArithmeticException
}