Commit ce804d5

Merge pull request #32 from psobolewskiPhD/format_pytest
Some further updates to pytest to clarify and improve format
2 parents 967f76f + c3ed2a4

2 files changed (+77, -61 lines)

arrays/completed/test_arrays_completed.py

Lines changed: 47 additions & 46 deletions
````diff
@@ -58,52 +58,53 @@ def test_divide_arrays(a, b, expect):
     assert output == expect
 
 
-def test_add_arrays_errors():
-    a = [1, 2]
-    b = [1, 2, 3]
-    with pytest.raises(ValueError):
-        _ = add_arrays(a, b)
-    a = 1
-    b = "a"
-    with pytest.raises(TypeError):
-        _ = add_arrays(a, b)
-
-
-def test_subtract_arrays_errors():
-    a = [1, 2]
-    b = [1, 2, 3]
-    with pytest.raises(ValueError):
-        _ = subtract_arrays(a, b)
-    a = 1
-    b = "a"
-    with pytest.raises(TypeError):
-        _ = subtract_arrays(a, b)
-
-
-def test_multiply_arrays_errors():
-    a = [1, 2]
-    b = [1, 2, 3]
-    with pytest.raises(ValueError):
-        _ = multiply_arrays(a, b)
-    a = 1
-    b = "a"
-    with pytest.raises(TypeError):
-        _ = multiply_arrays(a, b)
-
-
-def test_divide_arrays_errors():
-    a = [1, 2]
-    b = [1, 2, 3]
-    with pytest.raises(ValueError):
-        _ = divide_arrays(a, b)
-    a = 1
-    b = "a"
-    with pytest.raises(TypeError):
-        _ = divide_arrays(a, b)
-    a = [1, 2, 3]
-    b = [0, 2, 3]
-    with pytest.raises(ZeroDivisionError):
-        _ = divide_arrays(a, b)
+@pytest.mark.parametrize(
+    ("a", "b", "error"),
+    [
+        ([1, 2], [1, 2, 3], ValueError),
+        (1, "a", TypeError),
+    ]
+)
+def test_add_arrays_errors(a, b, error):
+    with pytest.raises(error):
+        add_arrays(a, b)
+
+
+@pytest.mark.parametrize(
+    ("a", "b", "error"),
+    [
+        ([1, 2], [1, 2, 3], ValueError),
+        (1, "a", TypeError),
+    ]
+)
+def test_subtract_arrays_errors(a, b, error):
+    with pytest.raises(error):
+        subtract_arrays(a, b)
+
+
+@pytest.mark.parametrize(
+    ("a", "b", "error"),
+    [
+        ([1, 2], [1, 2, 3], ValueError),
+        (1, "a", TypeError),
+    ]
+)
+def test_multiply_arrays_errors(a, b, error):
+    with pytest.raises(error):
+        multiply_arrays(a, b)
+
+
+@pytest.mark.parametrize(
+    ("a", "b", "error"),
+    [
+        ([1, 2], [1, 2, 3], ValueError),
+        (1, "a", TypeError),
+        ([1, 2, 3], [0, 2, 3], ZeroDivisionError),
+    ]
+)
+def test_divide_arrays_errors(a, b, error):
+    with pytest.raises(error):
+        divide_arrays(a, b)
 
 
 @pytest.fixture
````

pytest.md

Lines changed: 30 additions & 15 deletions
````diff
@@ -1,61 +1,76 @@
 # Testing Python code with `pytest`
 
-1. In this section, we are going to be using _pytest_ to run automated tests on some code. The code we are going to be using is on the **arrays** folder. The functions that will be tested are in **arrays.py**, and the test code will go in **test_arrays.py**. Testing your code is extremely important, and it should be done WHILE you are writing it, rather than AFTER.
+1. In this section, we are going to be using _pytest_ to run automated tests on some code. The code we are going to be using is in the **arrays** folder within this repository. The functions that will be tested are in **arrays.py**, and the test code will go in **test_arrays.py**. Testing your code is extremely important, and it should be done WHILE you are writing it, rather than AFTER.
 
 2. Usual methods for testing code are doing some manual checks, such as running it over particular input files or variables and checking the results. This has limitations: it might fail to check some parts of the code, or it might fail to find errors that are not immediately obvious. In either case, it is also difficult to find exactly where your errors might be.
 
 3. To avoid those pitfalls, we should write a set of tests that use known inputs and check for matching with a set of expected outputs. We will write each test to run over as little of the code as possible, such that we can easily identify which parts of the code are failing. This type of test is called a "unit test", in contrast to "integration tests" that test multiple parts of the code at once. Tip: break long, complex functions with multiple control structures (e.g. if/else, for, while) into smaller functions that do one thing each to make your code easier to read, test, and maintain.
 
-4. Let's start by looking into **arrays.py** and checking the **add_arrays()** function. It's a pretty simple function; it takes two arrays and add them up element-wise. Now, let's look at **test_add_arrays()** in **test_arrays.py**. How is testing being done here? Can you break it? What happens when you run `python arrays/test_arrays.py`?
+4. Let's start by looking into **arrays.py** and checking the `add_arrays()` function. It's a pretty simple function; it takes two arrays and adds them up element-wise. Now, let's look at `test_add_arrays()` in **test_arrays.py**. How is testing being done here? Can you break it? What happens when you run:
 
-5. The output of this test is not particularly useful. Imagine if you had five different functions with five different tests; would an output like `OK BROKEN OK BROKEN BROKEN` help much? Instead of that structure, we are going to use `assert` statements; `assert` is always followed by something boolean (i.e. something that will be either true or false). Empty lists, the number 0 and `None` are all false-y. If the boolean is true, nothing happens when `assert` is run; if it is false, an exception is raised. You can try running `assert 5 == 5` and `assert 5 == 6` in a python shell to see what happens.
+```bash
+python arrays/test_arrays.py
+```
+(Is your virtual environment active? If not, activate it first!)
 
-6. Now we are going to replace the if/else block in **test_add_arrays()** with an assert that looks like `assert output == expect`. What happens when you run `python arrays/test_arrays.py`? What if the test fails?
+5. The output of this test is not particularly useful. Imagine if you had five different functions with five different tests; would an output like `OK BROKEN OK BROKEN BROKEN` help much?
+Instead of that structure, we are going to use `assert` statements; `assert` is always followed by something *boolean* (i.e. something that will be either true or false). Empty lists, the number 0 and `None` are all false-y. If the boolean is true, nothing happens when `assert` is run; if it is false, an exception is raised. You can try running `assert 5 == 5` and `assert 5 == 6` in a Python shell to see what happens.
 
-7. You see that now at least we get a specific line when the test fails; that's a good start! However, if we had multiple tests, code execution would be stopped at the exception thrown by `assert`. Also, we still need to explicitly call `test_add_arrays()` in that test file, which would be easy to forget, especially if we had a bunch of tests. That's where we are going to be using `pytest`!
+6. Now we are going to replace the `if/else` block in `test_add_arrays()` with an assert that looks like `assert output == expect`. What happens when you run `python arrays/test_arrays.py`? What if the test fails?
 
````
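As an aside for readers following along (not part of this commit): a minimal sketch of what step 6 could leave you with, assuming `add_arrays()` is importable from **arrays.py**; the values are illustrative.

```python
from arrays import add_arrays  # assumes the test file sits next to arrays.py


def test_add_arrays():
    a = [1, 2, 3]
    b = [4, 5, 6]
    expect = [5, 7, 9]

    output = add_arrays(a, b)

    # No if/else reporting needed: a failing comparison raises
    # AssertionError and points at this exact line.
    assert output == expect
```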
````diff
-8. Our first step is removing the call to `test_add_arrays()` from the end of **test_arrays.py**; pytest will take care of that for us. Now, in your terminal, just run `pytest`. What happened? What if the test fails?
+7. You see that now at least we get a specific line when the test fails; that's a good start! However, if we had multiple tests, code execution would be stopped at the exception thrown by `assert`. Also, we still need to explicitly call `test_add_arrays()` in that test file, which would be easy to forget, especially if we had a bunch of tests. That's where we are going to be using [`pytest`](https://docs.pytest.org/en/stable/)!
+
+8. Our first step is removing the call to `test_add_arrays()` from the end of `test_arrays.py`; `pytest` will take care of that for us. Now, in your terminal, just run `pytest`. What happened? What if the test fails?
 
 9. `pytest` will find all files named `test_*.py` and `*_test.py` and all functions with names starting with `test` inside these files, and it will run those, one at a time, reporting the results of each.
 
-10. It's your turn; write a `test_subtract_arrays()` function that tests the `subtract_arrays()` function! What happens when you run `pytest` now?
+10. **Exercise:** It's your turn to write a test! Write a `test_subtract_arrays()` function in `test_arrays.py` that tests the `subtract_arrays()` function in `arrays.py`! What happens when you run `pytest` now?
 
````
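One possible answer to the exercise in item 10, under the assumption that `subtract_arrays()` subtracts element-wise (your names and values may differ):

```python
from arrays import subtract_arrays  # assuming the same layout as above


def test_subtract_arrays():
    a = [5, 7, 9]
    b = [4, 5, 6]
    expect = [1, 2, 3]

    output = subtract_arrays(a, b)

    assert output == expect
```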
````diff
-11. Let's do the opposite now; write a `test_multiply_arrays()` function with the behavior you would expect to see from a `multiply_arrays()` function, and then write `multiply_arrays()` to make sure the tests pass! This is a process called _Test-driven development_ (TDD) - you start by writing your code requirements as tests, and then write code that passes those tests. It is a popular approach in certain kinds of software development.
+11. **Exercise:** Let's do the opposite: write a `test_multiply_arrays()` function with the behavior you would *expect* to see from a `multiply_arrays()` function. Then, write `multiply_arrays()` to fulfill the test requirements.
+This is a process called _Test-driven development_ (TDD): you start by writing your code requirements as tests and then write code that passes those tests. It is a popular approach in certain kinds of software development.
 
````
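For item 11, the TDD flow might look like the sketch below, assuming `multiply_arrays()` should multiply element-wise. In practice the test goes in **test_arrays.py** and the implementation in **arrays.py**; they are shown together here for brevity.

```python
# Step 1: write the requirement as an (initially failing) test.
def test_multiply_arrays():
    output = multiply_arrays([1, 2, 3], [4, 5, 6])
    assert output == [4, 10, 18]


# Step 2: write just enough code to make the test pass.
def multiply_arrays(a, b):
    return [x * y for x, y in zip(a, b)]
```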
````diff
-12. Now imagine you want to test multiple cases in `test_add_arrays()` - positive results, negative results, zero results, for example. You could change the code in that test function to create the `a`, `b` and `expect` arrays multiple times, and do one assertion per case. However, pytest allows for a simpler possibility: parameterizing inputs to your test. You can do this using the decorator [`@pytest.mark.parametrize()`](https://docs.pytest.org/en/stable/how-to/parametrize.html) before your test function. It takes two arguments: a *tuple or string* with the *names* of the parameters you want to pass to this function, and a *list* containing *tuples* of values of the parameters you want to pass. So for a single case, it would look like `@pytest.mark.parametrize(("a", "b", "expect"), [([1, 2, 3], [4, 5, 6], [5, 7, 9])])`, and you could add extra tuples for additional test cases. Then all you need to do is add `a`, `b` and `expect` as arguments to your test function.
+12. Now imagine you want to test multiple cases in `test_add_arrays()`: positive results, negative results, zero results, etc. You could change the code in that test function to create the `a`, `b` and `expect` arrays multiple times, and do one assertion per case.
+However, `pytest` allows for a simpler possibility: parameterizing inputs to your test. You can do this using the decorator [`@pytest.mark.parametrize()`](https://docs.pytest.org/en/stable/how-to/parametrize.html) before your test function.
+It takes two arguments: a *tuple or string* with the *names* of the parameters you want to pass to this function, and a *list* containing *tuples* of values of the parameters you want to pass. So for a single case, it would look like `@pytest.mark.parametrize(("a", "b", "expect"), [([1, 2, 3], [4, 5, 6], [5, 7, 9])])`, and you could add extra tuples for additional test cases. Then all you need to do is add `a`, `b` and `expect` as arguments to your test function.
 
````
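Putting item 12 together, a parametrized `test_add_arrays()` might look like this (the cases are illustrative):

```python
import pytest

from arrays import add_arrays  # assuming the same layout as above


@pytest.mark.parametrize(
    ("a", "b", "expect"),
    [
        ([1, 2, 3], [4, 5, 6], [5, 7, 9]),        # positive results
        ([1, 2, 3], [-4, -5, -6], [-3, -3, -3]),  # negative results
        ([1, 2, 3], [-1, -2, -3], [0, 0, 0]),     # zero results
    ],
)
def test_add_arrays(a, b, expect):
    # pytest calls this function once per tuple in the list
    # above, reporting each case as a separate test.
    output = add_arrays(a, b)
    assert output == expect
```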
````diff
-13. Try doing that for your test functions. What happens when you run `pytest` now? Can you find the errors in the `divide_arrays()` function with some clever testing? (hint: there are two errors)
+13. **Exercise:** Try using `@pytest.mark.parametrize()` for your test functions. What happens when you run `pytest` now? Write a test for `divide_arrays()` using this approach. Can you find the bugs in the `divide_arrays()` function with some clever testing?
 
 14. So far, we have assumed that everything passed to our array functions is correct; that is rarely an assumption you can make in real life. What happens if you run `add_arrays("this is a string", 1)`? What about `add_arrays([1, 2], [1, 2, 3])`?
 
 15. It's time to add explicit exception handling to our functions. You will probably want to do `raise ValueError("array size mismatch")` for the case where array sizes are different, and `raise TypeError("arguments should be lists")` for when the arguments are not lists.
 
 16. Now, we can add new test functions named `test_add_arrays_errors()` and so on, where we check if errors are being raised correctly. That is done by wrapping our function call with `with pytest.raises(ValueError)`, for example. What happens when you run `pytest` now? Add checks for both possible errors we came up with. What other cases can happen in e.g. `divide_arrays()`?
 
````
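A sketch of both halves of items 15 and 16: validation inside the function, and a test that the errors are actually raised. (The completed file's diff above shows the parametrized form of such tests.) Shown in one snippet for brevity; in the repository the function lives in **arrays.py** and the test in **test_arrays.py**.

```python
import pytest


def add_arrays(a, b):
    # Check types first: len() would itself fail on non-lists.
    if not (isinstance(a, list) and isinstance(b, list)):
        raise TypeError("arguments should be lists")
    if len(a) != len(b):
        raise ValueError("array size mismatch")
    return [x + y for x, y in zip(a, b)]


def test_add_arrays_errors():
    # pytest.raises passes only if the expected exception
    # is raised inside the with-block.
    with pytest.raises(ValueError):
        add_arrays([1, 2], [1, 2, 3])
    with pytest.raises(TypeError):
        add_arrays(1, "a")
```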
````diff
-17. We have successfully found a way to separate the data to be tested from the code to be tested. `pytest` has an even better way to do that for more complex cases, for example, when you want multiple test functions using the same data. They are called [_fixtures_](https://docs.pytest.org/en/stable/explanation/fixtures.html#about-fixtures). A _fixture_ is defined as a function that returns something we want to use repeatedly in our tests. `pytest` provides [some fixtures out of the box](https://docs.pytest.org/en/stable/reference/fixtures.html), like the very useful `tmp_path` fixture that gives you a unique temporary location. But you can also create your own: fixtures can be used to set up test data, create mock objects, or perform any other setup tasks that are needed for your tests. To define a fixture, you use the decorator `@pytest.fixture` before a function. After defining a fixture, you can pass the name of the function as an argument in your test function, and that argument will assume the value that is returned by the fixture.
+17. We have successfully found a way to separate the data to be tested from the code to be tested. `pytest` has an even better way to do that for more complex cases, for example, when you want multiple test functions using the same data. They are called [_fixtures_](https://docs.pytest.org/en/stable/explanation/fixtures.html#about-fixtures).
+A _fixture_ is defined as a function that returns something we want to use repeatedly in our tests. `pytest` provides [some fixtures out of the box](https://docs.pytest.org/en/stable/reference/fixtures.html), like the very useful `tmp_path` fixture that gives you a unique temporary location.
+But you can also create your own: fixtures can be used to set up test data, create mock objects, or perform any other setup tasks that are needed for your tests. To define a fixture, you use the decorator `@pytest.fixture` before a function. After defining a fixture, you can pass the name of the function as an argument in your test function, and that argument will assume the value that is returned by the fixture.
 
 18. Try creating a `pair_of_lists()` fixture and passing it to a test function. What happens when you run `pytest`? Is `pair_of_lists()` run?
 
+19. Some final tips: append `-v` for more verbose output or `-s` to see `print()` outputs. You can also run specific tests by passing the file name and function name to `pytest`, e.g. `pytest arrays/test_arrays.py::test_add_arrays`.
 
````
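For item 18, a sketch of what the `pair_of_lists()` fixture and a test that uses it could look like (the data is up to you):

```python
import pytest

from arrays import add_arrays  # assuming the same layout as above


@pytest.fixture
def pair_of_lists():
    # Runs once for each test that requests it; the return
    # value is what the test receives.
    return [1, 2, 3], [4, 5, 6]


def test_add_arrays_with_fixture(pair_of_lists):
    # pytest matches the argument name to the fixture name,
    # calls the fixture, and passes in its return value.
    a, b = pair_of_lists
    assert add_arrays(a, b) == [5, 7, 9]
```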
````diff
 ## Test coverage
 
 1. When writing tests, it's important to know how much of your code is actually being tested by your test suite. This is called **code coverage**. Code coverage is a metric that tells you what percentage of your code is run ("covered") when your tests are executed. High coverage means most of your code is tested, while low coverage means there are many untested parts, where bugs could hide.
 
 2. The most common tool for measuring code coverage in Python is [`coverage`](https://coverage.readthedocs.io/). You can run it from the command line to see how much of your code is covered by your tests.
 
-3. To use `coverage` with ``pytest, run:
-```sh
+3. To use `coverage` with `pytest`, run:
+
+```bash
 coverage run -m pytest
 ```
 This will run your tests and collect coverage data.
 
 4. To see a summary in your terminal, run:
-```sh
+
+```bash
 coverage report
 ```
 
 5. To generate a detailed HTML report you can view in your browser, run:
-```sh
+
+```bash
 coverage html
 ```
 Then open the file `htmlcov/index.html` in your browser to explore which lines of code are covered and which are not.
````
