I read a lot about unit testing, but unfortunately I usually work on
single-developer projects with tight time constraints, so I have never
created full tests for an entire project in the past. This means I'm a
newbie in this aspect of software development.

I know the importance of testing, but we have to admit that it increases
the cost of software development a lot, at least at the beginning. We
don't always have the possibility to pay this price.

Every time I start writing some tests, I eventually think I'm wasting my
precious time. Most probably because I'm not able to create valid tests.
So I'm asking you to help on a real case.

First of all, I have a great deal of confusion in my mind about the
subtle differences between mocks, stubs, fakes, dummies and so on.
Anyway, I think these names are not so important, so let's go on.

These days I'm working on a calendar scheduler module. The client of
this module can configure up to N events that could be:
- single (one shot)
- weekly (for example, on Monday and Saturday of every week)
- monthly (for example, on days 3, 5 and 15 of every month)
- yearly (for example, on day 7 of January, February and March)
Weekly, monthly and yearly events have a starting time and *could* have
a maximum number of repetitions (or they could repeat forever).

The interface is very simple. I have some functions to initialize the
configuration of an event (a simple C struct):

    void calev_config_init_single(CalendarEventConfig *config,
            time_t timestamp, CalendarEventActions *actions);
    void calev_config_init_weekly(CalendarEventConfig *config,
            time_t timestamp, uint8_t weekdays, unsigned int nrep,
            CalendarEventActions *actions);
    void calev_config_init_monthly(CalendarEventConfig *config,
            time_t timestamp, uint32_t mdays, unsigned int nrep,
            CalendarEventActions *actions);
    void calev_config_init_yearly(CalendarEventConfig *config,
            time_t timestamp, uint16_t months, unsigned int nrep,
            CalendarEventActions *actions);

I have a function that initializes the module with some pre-programmed
events:

    void calendar_init(CalendarEventConfig *list_events, size_t num_events);

And I have a function, called every second, that triggers actions on
occurrences:

    void calendar_task(void);

So the client of the calendar module usually does the following:

    CalendarEventConfig events[4];
    calev_config_init_...(&events[0], ...
    calev_config_init_...(&events[1], ...
    calev_config_init_...(&events[2], ...
    calev_config_init_...(&events[3], ...
    calendar_init(events, 4);
    while (1) {
        calendar_task();    // every second
        ...
    }

The calendar module depends on some other modules. First of all, it asks
for the current time as a time_t. And it calls the make_actions()
function, with certain parameters, when an event occurrence expires.

I know how to fake the time, replacing the system time with a fake time.
And I know how to create a mock to check make_actions() calls and
parameters.

Now the problem is... which tests to write?

I started writing some tests but, after completing 30 of them, I began
thinking my work is not valid. I was tempted to write tests in this way:

    TEST(TestCalendar, OneWeeklyEvent_InfiniteRepetition)
    {
        CalendarEventConfig cfg;
        calev_config_init_weekly(&cfg, parse_time("01/01/2024 10:00:00"),
                                 MONDAY | SATURDAY, 0, &actions);

        set_time(parse_time("01/01/2024 00:00:00"));    // It's Monday
        calendar_init(&cfg, 1);

        set_time(parse_time("01/01/2024 10:00:00"));    // First occurrence
        mock().expectOneCall("make_actions")...
        calendar_task();

        set_time(parse_time("06/01/2024 10:00:00"));    // It's Saturday
        mock().expectOneCall("make_actions")...
        calendar_task();

        set_time(parse_time("08/01/2024 10:00:00"));    // It's Monday again
        mock().expectOneCall("make_actions")...
        calendar_task();

        mock().checkExpectations();
    }

However, it seems there are many sub-tests inside the
OneWeeklyEvent_InfiniteRepetition test (the first occurrence, the second
and the third). Tests should have a single assertion and should test a
very specific behaviour, so I split this test into:

    TEST(TestCalendar, OneWeeklyEventInfiniteRepetition_FirstOccurrence)
    TEST(TestCalendar, OneWeeklyEventInfiniteRepetition_SecondOccurrence)
    TEST(TestCalendar, OneWeeklyEventInfiniteRepetition_ThirdOccurrence)

What else? When to stop? Now for the weekly event with only 5
repetitions:

    TEST(TestCalendar, OneWeeklyEvent5Repetitions_FirstOccurrence)
    TEST(TestCalendar, OneWeeklyEvent5Repetitions_SecondOccurrence)
    TEST(TestCalendar, OneWeeklyEvent5Repetitions_SixthOccurrence_NoActions)

The combinations and possibilities are very high. calendar_init() can be
called with only 1 event, with 2 events and so on. And the behaviour in
all these cases must be tested, because the module could behave well
with 1 event but not with 4 events.

The events can be passed to calendar_init() in a random (not
chronological) order. I should test this behaviour too.

There could be one-shot events, weekly events with infinite repetitions,
weekly events with a few repetitions, monthly... yearly, with certain
days in common...

calendar_init() can be called when the current time is already past the
starting timestamp of all the events. In some cases there could still be
future occurrences (infinite repetitions); in other cases the event
could be completely expired (limited repetitions).

I'm confused. How do I scientifically approach this testing problem?
How do I avoid the proliferation of tests? Which tests are really
important, and how do I write them?
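[For reference, a minimal sketch of the fake-time seam such a test
assumes. set_time() and parse_time() appear in the test above; the
calendar_now() hook is a hypothetical name for whatever the module calls
instead of time():]

    #include <stdio.h>
    #include <time.h>

    static time_t test_now;        /* fake "current time", set by tests */
    static int    use_fake_time;   /* 0 = real clock, 1 = fake clock    */

    time_t calendar_now(void)      /* the module asks this, not time()  */
    {
        return use_fake_time ? test_now : time(NULL);
    }

    void set_time(time_t t)        /* test helper used in the TEST above */
    {
        use_fake_time = 1;
        test_now = t;
    }

    time_t parse_time(const char *s)   /* "dd/mm/yyyy hh:mm:ss" -> time_t */
    {
        struct tm tm = {0};
        sscanf(s, "%d/%d/%d %d:%d:%d", &tm.tm_mday, &tm.tm_mon, &tm.tm_year,
               &tm.tm_hour, &tm.tm_min, &tm.tm_sec);
        tm.tm_mon  -= 1;       /* struct tm months are 0-based    */
        tm.tm_year -= 1900;    /* struct tm years count from 1900 */
        tm.tm_isdst = -1;      /* let mktime() figure out DST     */
        return mktime(&tm);
    }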
On 8/27/2024 3:52 AM, pozz wrote:
> I read a lot about unit testing, but unfortunately I usually work on
> single-developer projects with tight time constraints, so I have never
> created full tests for an entire project in the past. This means I'm a
> newbie in this aspect of software development.
Fix it now, or fix it later -- when you have even LESS time (because
customers are using your defective product).
Testing should start when you define the module, continue while you are
implementing it (you will likely notice "conditions" that could lead to
bogus behavior as you are writing them!), and continue even when you
consider it "done".
Thinking about testing when you draft the specification helps you
challenge your notion of the suitability of such a module for the task(s)
at hand as you imagine use cases (and MISuse cases).
> I know the importance of testing, but we have to admit that it
> increases the cost of software development a lot, at least at the
> beginning. We don't always have the possibility to pay this price.
If you assume there are two types of "software" -- stuff that TRIES to work and stuff that HOPES to work, then the cost of the latter can be a lot less...
because you really don't care *if* it works! Apples; Oranges.
> These days I'm working on a calendar scheduler module. The client of
> this module can configure up to N events that could be:
> - single (one shot)
> - weekly (for example, on Monday and Saturday of every week)
> - monthly (for example, on days 3, 5 and 15 of every month)
> - yearly (for example, on day 7 of January, February and March)
> Weekly, monthly and yearly events have a starting time and *could*
> have a maximum number of repetitions (or they could repeat forever).
Testing aims to prove that:
- your specification for the module accurately reflects its need (suitability)
- the module actually implements the specification (compliance)
- the module is well-behaved in "all" possible scenarios, even when misused
- changes to the module haven't compromised past performance (regression)
It also gives you an idea of how your "process" is working; if you are
finding *lots* of bugs, perhaps you should be testing more aggressively
earlier in the process. (There is a tendency to NOT want to make lots of
changes/fixes to code that you've convinced yourself is "almost done".)
And, it provides exemplars that you can use to evaluate performance.
> The calendar module depends on some other modules. First of all, it
> asks for the current time as a time_t. And it calls the make_actions()
> function, with certain parameters, when an event occurrence expires.
Treat each as an independent, testable entity. This makes it easier to design test cases and easier to isolate anomalous behavior(s).
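[For example, the make_actions() dependency can be cut at link time with
a recording stub -- a hand-rolled alternative to the CppUTest mock
mentioned in the original post. The parameter list here is a guess; the
real signature isn't shown:]

    #include <assert.h>
    #include <stddef.h>

    /* Recording stub linked in place of the real make_actions(). */
    static int   actions_calls;        /* how many times it ran      */
    static void *actions_last_params;  /* last parameters it was fed */

    void make_actions(void *params)
    {
        actions_calls++;
        actions_last_params = params;
    }

    /* a test then asserts on what the module under test actually did */
    static void expect_no_actions(void)
    {
        assert(actions_calls == 0);
    }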
I'm confused. How to scientifically approach this testing problem? How
to avoid the proliferation of tests? Which tests are really important
and how to write them?
Make a concerted effort thinking of how to *break* it. E.g., if you try
to schedule an event for some time in the past, how should it react?
Should it immediately "trigger" the event? Should it silently dismiss
the event? Should it throw an error?
What if "the past" was just half a second ago and you've been unlucky
enough that your task was delayed a bit so that the clock ticked off
another second before you got a chance to schedule your event AHEAD of
time?
If there are multiple steps to scheduling an event (e.g., creating a
structure and then passing it on to a scheduler), consider if one of the
steps might (intentionally!) be bypassed and how that might inject
faulty behavior into your design. E.g., if you do all of your sanity
checks in the "create structure" step, BYPASSING that step and passing a
structure created by some other means (e.g., const data) avoids that
sanity checking; will the scheduler gag on possibly "insane" data
introduced in such a manner?
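[A cheap guard against that: re-validate at the point of consumption,
not just at creation. A sketch; the struct shape below is hypothetical,
since the post doesn't show the real one:]

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical shape -- the post doesn't show the real struct. */
    typedef enum { CALEV_SINGLE, CALEV_WEEKLY,
                   CALEV_MONTHLY, CALEV_YEARLY } CalevType;
    typedef struct { CalevType type; unsigned nrep; } CalendarEventConfig;

    /* Re-check at the consumption point, so a config built by hand
       (bypassing calev_config_init_*) can't poison the scheduler. */
    static bool calendar_config_is_sane(const CalendarEventConfig *cfg)
    {
        return cfg != NULL
            && (cfg->type == CALEV_SINGLE  || cfg->type == CALEV_WEEKLY
             || cfg->type == CALEV_MONTHLY || cfg->type == CALEV_YEARLY);
    }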
Can a client become confused as to which structures are "still active"
vs. "already consumed"? If an active structure is altered, can that
lead to an inconsistent state (e.g., if the scheduler has acted on *part*
of the information but is still relying on the balance to complete the
action)?
Can a client safely repurpose an event specification? Or, once created, does the scheduler "own" it? Is there some safety window in which such alterations won't "confuse" the scheduler, outside of which the scheduler
may have already taken some actions on the assumption that the event IS
still scheduled?
What happens if someone changes the current *time*? Do all events that are now "past" instantly trigger? Are they dismissed? Do they move forward or backwards in time based on the delta introduced to the current time?
[This is a common flaw in folks trying to optimize such subsystems.
There is usually a need for relative events AND absolute events as an
acknowledgement that "time" changes.]
These interactions with the rest of the system (clients) can help you
think about the DESIRED functionality and the actual use patterns. You
may discover your implementation strategy is inherently faulty,
rendering the *specification* defective.
Thank you for your reply, Don. Those are valuable words, which I have
read and heard many times. However, I'm having trouble translating them
into real testing.
When you write: test for this, test for that, what happens if the client
uses the module in a wrong way, what happens when the system clock
changes a little or a lot, what happens when the task missed the exact
timestamp of an event?
I was trying to write tests for *all* of those situations, but it seemed
to me a very, VERY, *VERY* big job. The implementation of the calendar
module took me a couple of days; the tests seem an infinite job.
I have four types of events, and for each test I should check the
correct behaviour of each type.
What happens if the timestamp of an event has already expired when it is
added to the system? I should write 4 tests, one for each type:
AddOneShotEventWithExpiredTimestamp_NoActions
AddWeeklyEventWithExpiredTimestamp_NoActions
AddMonthlyEventWithExpiredTimestamp_NoActions
AddYearlyEventWithExpiredTimestamp_NoActions
What does "expired timestamp" mean? Suppose the event timestamp is
"01/01/2024 10:00:00". This timestamp could be expired by a few seconds,
a few minutes, one day, months or years. Maybe the module performs well
when the system time is on a different date, but badly if the timestamp
expired on the same day, for example at "01/01/2024 11:00:00" or
"01/01/2024 10:00:01".
Should I add:
AddOneShotEventWithExpiredTimestamp1s_NoActions
AddOneShotEventWithExpiredTimestamp1m_NoActions
AddOneShotEventWithExpiredTimestamp1h_NoActions
AddOneShotEventWithExpiredTimestamp1d_NoActions
AddWeeklyEventWithExpiredTimestamp1s_NoActions
AddWeeklyEventWithExpiredTimestamp1m_NoActions
AddWeeklyEventWithExpiredTimestamp1h_NoActions
AddWeeklyEventWithExpiredTimestamp1d_NoActions
AddMonthlyEventWithExpiredTimestamp1s_NoActions
AddMonthlyEventWithExpiredTimestamp1m_NoActions
AddMonthlyEventWithExpiredTimestamp1h_NoActions
AddMonthlyEventWithExpiredTimestamp1d_NoActions
AddYearlyEventWithExpiredTimestamp1s_NoActions
AddYearlyEventWithExpiredTimestamp1m_NoActions
AddYearlyEventWithExpiredTimestamp1h_NoActions
AddYearlyEventWithExpiredTimestamp1d_NoActions
That's 16 tests for just a single stupid scenario. If I continue this
way, I will have thousands of tests. I don't think this is the way to do
testing, is it?
On 8/30/2024 1:18 AM, pozz wrote:
> When you write: test for this, test for that, what happens if the
> client uses the module in a wrong way, what happens when the system
> clock changes a little or a lot, what happens when the task missed the
> exact timestamp of an event?
> I was trying to write tests for *all* of those situations, but it
> seemed to me a very, VERY, *VERY* big job. The implementation of the
> calendar module took me a couple of days; the tests seem an infinite
> job.
Because there are lots of ways your code can fail. You have to prove
that it doesn't fail in ANY of those ways.
    unsigned multiply(unsigned multiplicand, unsigned multiplier) {
        return 6;    // always 6, no matter the arguments
    }
works well for the test cases:
2,3
3,2
6,1
1,6
but not so well for:
8,5
17,902
1,1
etc.
> I have four types of events, and for each test I should check the
> correct behaviour of each type.
> What happens if the timestamp of an event has already expired when it
> is added to the system? I should write 4 tests, one for each type:
> AddOneShotEventWithExpiredTimestamp_NoActions
> AddWeeklyEventWithExpiredTimestamp_NoActions
> AddMonthlyEventWithExpiredTimestamp_NoActions
> AddYearlyEventWithExpiredTimestamp_NoActions
Chances are, there is one place in your code that is aware of the fact
that the event is scheduled for a PAST time. So, you only need to create
one test (actually, two -- one that proves one behavior for time
*almost* NOT past and another for time JUST past).
Your goal (having already implemented the modules) is to exercise each
path through the code.
    whatever() {
        ...
        if (x > y) {
            // do something
        } else {
            // do something else
        }
        ...
    }
Here, there are only two different paths through the code:
- one for x > y
- one for !(x > y)
So, you need to create test cases that will exercise each path.
To verify your "x > y" test, you would want to pick an x that is
just detectably larger than y. And, another case where x is as
large as possible WITHOUT exceeding y. You can view this as defining
the "edge" between the two routes.
If, for example, you picked x = 5 and x = 3 as your test cases
(where y = 4), then you WOULD exercise both paths. But, if you had mistakenly coded this as
    if (x >= y) {
        // do something
    } else {
        // do something else
    }
you wouldn't be able to detect that fault, whereas using x = 5 and x = 4 would cause you to wonder why "do something else" never got executed!
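[As a concrete sketch of that edge pair, with y = 4:]

    #include <assert.h>

    /* The two edge cases for "x > y" with y = 4: the smallest x on the
       "taken" side and the largest x on the "not taken" side. If the
       code mistakenly tests "x >= y", the second case takes the wrong
       path and the fault becomes visible. */
    static int is_greater(int x, int y) { return x > y; }

    int main(void)
    {
        assert(is_greater(5, 4) == 1);  /* just detectably larger        */
        assert(is_greater(4, 4) == 0);  /* as large as possible without
                                           exceeding y                   */
        return 0;
    }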
> What does "expired timestamp" mean? Suppose the event timestamp is
> "01/01/2024 10:00:00". This timestamp could be expired by a few
> seconds, a few minutes, one day, months or years. Maybe the module
> performs well when the system time is on a different date, but badly
> if the timestamp expired on the same day, for example at "01/01/2024
> 11:00:00" or "01/01/2024 10:00:01".
A time that is "in the past". If it is time 't' now, what happens if the
client specifies an event to happen at time t-1? Should you
*immediately* activate the event (because NOW, t > t-1)? Or should you
discard it, because it was SUPPOSED to happen 1 second ago?
What if it is t-495678? Is there a different type of action you expect
if the time is "a long time ago" vs. "just recently"?
Do events happen at *instants* in time? Or in CONDITIONS of time? If
they happen at instants, then you have to ensure you can discern one
instant from another.
> Should I add:
> AddOneShotEventWithExpiredTimestamp1s_NoActions
> AddOneShotEventWithExpiredTimestamp1m_NoActions
> AddOneShotEventWithExpiredTimestamp1h_NoActions
> AddOneShotEventWithExpiredTimestamp1d_NoActions
> AddWeeklyEventWithExpiredTimestamp1s_NoActions
> AddWeeklyEventWithExpiredTimestamp1m_NoActions
> AddWeeklyEventWithExpiredTimestamp1h_NoActions
> AddWeeklyEventWithExpiredTimestamp1d_NoActions
> AddMonthlyEventWithExpiredTimestamp1s_NoActions
> AddMonthlyEventWithExpiredTimestamp1m_NoActions
> AddMonthlyEventWithExpiredTimestamp1h_NoActions
> AddMonthlyEventWithExpiredTimestamp1d_NoActions
> AddYearlyEventWithExpiredTimestamp1s_NoActions
> AddYearlyEventWithExpiredTimestamp1m_NoActions
> AddYearlyEventWithExpiredTimestamp1h_NoActions
> AddYearlyEventWithExpiredTimestamp1d_NoActions
> That's 16 tests for just a single stupid scenario. If I continue this
> way, I will have thousands of tests. I don't think this is the way to
> do testing, is it?
You declare what scenario you are testing for as a (commentary) preface
to the test stanza.
If you are testing to ensure "NoActions" is handled correctly, then
you look to see how many ways the "NoActions" criteria can tickle
the code.
If there is only ONE place where "NoActions" alters the flow through
the code, then you only need one test (actually, two as you need
to cover "SomeAction" to show that "NoAction" is different).
In a different test scenario, you would test that 1s, 1m, 1h, 1d,
etc. are all handled correctly IF EACH OF THOSE PASSES THROUGH YOUR
CODE OVER A DIFFERENT PATHWAY.
And, elsewhere, you might test to see that "repeated" events
operate correctly.
You "prove" that one scenario is handled correctly and then
don't need to reexamine those various tests again in any other
scenario UNLESS THEY ALTER THE PATH THROUGH THE CODE.
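[If the 1s/1m/1h/1d and per-type variants do NOT travel different paths,
they can be rows in one table-driven test instead of 16 named tests. A
sketch, with hypothetical fixture names standing in for the real
harness:]

    #include <assert.h>
    #include <stddef.h>

    typedef enum { EV_SINGLE, EV_WEEKLY, EV_MONTHLY, EV_YEARLY } EvType;

    /* hypothetical fixtures -- the real suite would drive calendar_init()
       and calendar_task() here and count mock make_actions() calls */
    static int actions_calls;
    static void make_expired_event(EvType type, long seconds_ago)
    { (void)type; (void)seconds_ago; }
    static void run_one_tick(void) { /* would call calendar_task() */ }

    /* one table-driven test instead of 16 hand-written ones */
    static const struct { EvType type; long expired_by; } cases[] = {
        /* every event type, expired by 1s, 1m, 1h and 1d */
        { EV_SINGLE,  1 }, { EV_SINGLE,  60 }, { EV_SINGLE,  3600 }, { EV_SINGLE,  86400 },
        { EV_WEEKLY,  1 }, { EV_WEEKLY,  60 }, { EV_WEEKLY,  3600 }, { EV_WEEKLY,  86400 },
        { EV_MONTHLY, 1 }, { EV_MONTHLY, 60 }, { EV_MONTHLY, 3600 }, { EV_MONTHLY, 86400 },
        { EV_YEARLY,  1 }, { EV_YEARLY,  60 }, { EV_YEARLY,  3600 }, { EV_YEARLY,  86400 },
    };

    void AddEventWithExpiredTimestamp_NoActions(void)
    {
        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            actions_calls = 0;
            make_expired_event(cases[i].type, cases[i].expired_by);
            run_one_tick();
            assert(actions_calls == 0);  /* make_actions() never invoked */
        }
    }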
Your understanding of how the code would LIKELY be crafted lets
you determine some of these tests before you've written ANY
code. E.g., I suggested "expired events" because I am reasonably
sure that SOMEWHERE your code is looking at "event time" vs. "now"
so you would need to test that comparison.
Your knowledge of how the code is *actually* crafted lets you
refine your test cases to cover specifics of YOUR implementation.
Note that test cases that are applied to version 1 of the code should
yield the same results in version 305, even if the implementation
changes dramatically. Because the FUNCTIONALITY shouldn't be
changing.
So, you can just keep adding test cases to your test suite;
you don't ever need to remove any.
[If a test STOPS working, you have to ask yourself how you have BROKEN/changed the function of the module]
For example, I could implement a "long" integer multiplication routine
by adding the multiplicand to an accumulator a number of times
dictated by the multiplier. I can create test cases for this implementation.
Later, I could revise the routine to use a shift-and-add approach.
BUT, THE ORIGINAL TEST CASES SHOULD STILL PASS! I might,
however, have to add some other tests to identify failures in
the shift logic in this new approach (e.g., if I only examined
the rightmost 26 of the bits in the multiplier, then a large
value multiplier would fail in this approach but could pass
in the repeated addition implementation). This would be
evident to me in looking at the code because there would be
a different path through the code when the "last bit" had been
checked.
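[A sketch of that evolution: both implementations below must satisfy the
original test cases, and the new large-multiplier case targets the shift
logic specifically:]

    #include <assert.h>
    #include <stdint.h>

    /* Version 1: multiply by repeated addition. */
    static uint32_t mul_add(uint32_t a, uint32_t b)
    {
        uint32_t acc = 0;
        while (b--)
            acc += a;
        return acc;
    }

    /* Version 305: shift-and-add. The ORIGINAL tests must still pass;
       the new large-multiplier test probes the shift logic that the
       repeated-addition version never had. */
    static uint32_t mul_shift(uint32_t a, uint32_t b)
    {
        uint32_t acc = 0;
        while (b) {
            if (b & 1)
                acc += a;
            a <<= 1;
            b >>= 1;
        }
        return acc;
    }

    int main(void)
    {
        assert(mul_add(2, 3)    == mul_shift(2, 3));     /* original case */
        assert(mul_add(17, 902) == mul_shift(17, 902));  /* original case */
        assert(mul_shift(3, 1u << 27) == (3u << 27));    /* exercises the
                                               high bits of the multiplier */
        return 0;
    }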
On 8/30/2024 2:21 PM, Don Y wrote:
> On 8/30/2024 1:18 AM, pozz wrote:
>> When you write: test for this, test for that, what happens if the
>> client uses the module in a wrong way, what happens when the system
>> clock changes a little or a lot, what happens when the task missed
>> the exact timestamp of an event?
>> I was trying to write tests for *all* of those situations, but it
>> seemed to me a very, VERY, *VERY* big job. The implementation of the
>> calendar module took me a couple of days; the tests seem an infinite
>> job.
> Because there are lots of ways your code can fail. You have to prove
> that it doesn't fail in ANY of those ways.
So you're confirming it's a very tedious and long job.
> Chances are, there is one place in your code that is aware of the fact
> that the event is scheduled for a PAST time. So, you only need to
> create one test (actually, two -- one that proves one behavior for
> time *almost* NOT past and another for time JUST past).
I read that tests shouldn't be written for the specific implementation,
but should be generic enough to keep working even if the implementation
changes.
> Your goal (having already implemented the modules) is to exercise each
> path through the code.
>     whatever() {
>         ...
>         if (x > y) {
>             // do something
>         } else {
>             // do something else
>         }
>         ...
>     }
> Here, there are only two different paths through the code:
> - one for x > y
> - one for !(x > y)
> So, you need to create test cases that will exercise each path.
Now I really know there are only two paths in the current implementation, but
I'm not sure this will stay the same in the future.
> Note that test cases that are applied to version 1 of the code should
> yield the same results in version 305, even if the implementation
> changes dramatically. Because the FUNCTIONALITY shouldn't be
> changing.
OK, but if you create tests knowing how you will implement the
functionality (execution paths), it's possible they will not be
sufficient when the implementation changes at version 305.
Before implementing the function I can imagine the following test cases:
assert(square(0) == 0)
assert(square(1) == 1)
assert(square(2) == 4)
assert(square(15) == 225)
Now the developer writes the function this way:
    unsigned char square(unsigned char num) {
        if (num == 0) return 0;
        if (num == 1) return 1;
        if (num == 2) return 4;
        if (num == 3) return 9;
        if (num == 4) return 16;
        if (num == 5) return 35;   // wrong: 5*5 is 25, and no test catches it
        if (num == 6) return 36;
        if (num == 7) return 49;
        ...
        if (num == 15) return 225;
    }
My tests pass, but the implementation is wrong. To avoid this, when
writing tests I should add so many test cases that I get a headache.
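[One escape from the headache here: the domain of an unsigned char
argument is tiny, so the tests can check the defining property over ALL
valid inputs instead of a handful of hand-picked ones -- which catches
the bogus square(5) above. A sketch:]

    #include <assert.h>

    unsigned char square(unsigned char num);   /* function under test */

    /* 'num' can only take 256 values, and results fit in an unsigned
       char only for num <= 15, so check the whole valid domain against
       the defining property num*num. */
    void test_square_exhaustively(void)
    {
        for (unsigned num = 0; num <= 15; num++)
            assert(square((unsigned char)num) == num * num);
    }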