A Summary of pytest Usage

Throughout this column, both the server-side and the client-side test framework designs are built on top of pytest, so it is worth devoting one article to summarizing how the pytest framework is used.

Studying this framework is also a good way to learn how to build secondary development (customization) on top of other frameworks.

When learning pytest, I recommend working through the official documentation at https://docs.pytest.org/en/8.2.x/how-to/usage.html . Even for a fast reader, digesting all of it takes about 3-4 days of focused study.

The full pytest API reference is available at https://docs.pytest.org/en/8.2.x/reference/reference.html .

In what follows, I will organize and summarize the parts I consider most important.

Installation

First, installation. pytest can be installed and its version verified with the following commands:

pip install -U pytest

pytest --version

Test discovery

Test files, and the functions and classes inside them, must follow certain naming conventions. These conventions come from the series of rules pytest uses to discover test targets. The rules are as follows:

  1. pytest first parses the test scope from the command line: a directory, a file name, or a specific test node ID can be given.
  2. If no target is specified, pytest determines the directories to search:
     - the testpaths configured in the ini file, if any;
     - otherwise, the current directory;
     - directories and items can be excluded via norecursedirs in the ini file, or via --ignore=path, --ignore-glob='*_01.py', and --deselect=item on the command line.
  3. Within those directories, pytest collects files named test_*.py or *_test.py.
  4. Within each collected file, pytest collects:
     - test-prefixed functions outside of classes;
     - test-prefixed methods inside Test-prefixed classes (including @staticmethod and @classmethod methods);
     - test cases written in the unittest.TestCase style, which are supported as well.
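The discovery-related settings above can be sketched in an ini file; the directory names and patterns below are illustrative values, not required defaults:

```ini
[pytest]
# directories pytest searches when no target is given on the command line
testpaths = tests
# directories that are never recursed into during collection
norecursedirs = .git build dist
```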

Assertions

pytest uses the plain assert statement for verification:

def f():
    return 4

def test_function():
    assert f() == 4

pytest also supports assertion rewriting; see the following documents:

  1. https://docs.pytest.org/en/8.2.x/how-to/writing_plugins.html#assertion-rewriting
  2. http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html

In most cases no special handling of assertion rewriting is needed; when it is, that is exactly an opportunity to practice secondary development on pytest.

Beyond ordinary value checks, pytest can also capture and verify exceptions raised by the code under test. (This is mainly useful in unit tests; integration tests such as API or UI tests rarely need it.) For example:
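A minimal sketch using pytest.raises (divide is a made-up function for illustration):

```python
import pytest

def divide(a, b):
    # deliberately naive: raises ZeroDivisionError when b == 0
    return a / b

def test_divide_by_zero():
    # the test passes only if the block raises the expected exception
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)

def test_exception_message():
    # match= checks the exception text against a regular expression
    with pytest.raises(ValueError, match="invalid literal"):
        int("not a number")
```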

Preparing test data

For check functions that need to run against a generated set of cases, pytest provides built-in support. This works much like the test parameterization feature in TestNG.

The main tools are pytest's fixture mechanism and the mark.parametrize marker.
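A small sketch of both mechanisms (the fixture name and the data below are invented for illustration):

```python
import pytest

@pytest.fixture
def sample_user():
    # fixture: prepares (and could later tear down) test data
    return {"name": "alice", "age": 30}

def test_user_age(sample_user):
    # pytest injects the fixture's return value by matching the parameter name
    assert sample_user["age"] == 30

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (2, 3, 5),
    (10, -1, 9),
])
def test_add(a, b, expected):
    # this test function runs once per tuple in the list above
    assert a + b == expected
```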

Running cases

There are three ways to run pytest cases:

  1. From the command line
  2. Programmatically, by invoking pytest from our own code
  3. From the IDE: PyCharm/IDEA ships a plugin that runs cases with a click on an icon

Command line

pytest test_mod.py

pytest testing/

pytest -k 'MyClass and not method'    (run only the cases that match the expression)

pytest tests/test_mod.py::test_func

pytest tests/test_mod.py::TestClass

pytest tests/test_mod.py::TestClass::test_method

pytest tests/test_mod.py::test_func[x1,y2]

pytest -m slow

pytest --pyargs pkg.testing

pytest @tests_to_run.txt

python -m pytest    (invoking via python also adds the current directory to sys.path)

Programmatic invocation

retcode = pytest.main()

retcode = pytest.main(["-x", "mytestdir"])
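A minimal sketch of wrapping pytest.main; run_suite and the "tests/" default are illustrative names, not pytest conventions:

```python
import pytest

def run_suite(target="tests/"):
    # equivalent to running on the command line: pytest -x tests/
    # returns an exit code (0 means all collected tests passed)
    return pytest.main(["-x", target])
```

The returned code can be forwarded to sys.exit() so that CI systems see the pass/fail status correctly.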

IDE integration

In the IDE settings, configure pytest as the default test runner for Python.

Using marks

In pytest, marks can be used to skip cases, parameterize cases, and work together with the fixture mechanism to generate test data. The built-in marks below are worth a quick look:

  1. usefixtures - use fixtures on a test function or class
  2. filterwarnings - filter certain warnings of a test function
  3. skip - always skip a test function
  4. skipif - skip a test function if a certain condition is met
  5. xfail - produce an “expected failure” outcome if a certain condition is met
  6. parametrize - perform multiple calls to the same test function.

We can also define custom marks of our own: https://docs.pytest.org/en/8.2.x/example/markers.html#mark-examples

Marks can also control which environment cases run in: https://docs.pytest.org/en/8.2.x/example/markers.html#custom-marker-and-command-line-option-to-control-test-runs
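A sketch combining built-in and custom marks; the "slow" mark is an invented example, and custom marks should be registered under markers = in the ini file to avoid unknown-mark warnings:

```python
import sys
import pytest

@pytest.mark.skip(reason="feature not implemented yet")
def test_not_ready():
    assert False  # never executed: the skip mark prevents this test from running

@pytest.mark.skipif(sys.platform == "win32", reason="requires a POSIX shell")
def test_posix_only():
    assert True

@pytest.mark.slow  # custom mark; select these cases with: pytest -m slow
def test_heavy_job():
    assert sum(range(1000)) == 499500
```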

Logging

When managing logs with pytest, there are two main needs:

  1. Emitting debug-level log messages from within the test code
  2. Getting the log output of a run to show up in Jenkins during CI integration

For the first need, the log level can be configured in pytest's ini file or on the command line, and the test code simply writes through a single shared logger object:

self.logger.debug(self.headers)

With that logger in place, debug messages can be emitted as needed.
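As a sketch, assuming a hypothetical ApiClient class of our own design and pytest's live-log options (log_cli / log_cli_level in the ini file, or --log-cli-level on the command line):

```python
import logging

class ApiClient:
    # hypothetical base client that API test helpers share
    def __init__(self):
        self.logger = logging.getLogger(self.__class__.__name__)
        self.headers = {"Content-Type": "application/json"}

    def get(self, url):
        # shows up in pytest output when the log level is DEBUG,
        # e.g. pytest --log-cli-level=DEBUG
        self.logger.debug("GET %s headers=%s", url, self.headers)
        return url
```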

For the second need, the information Jenkins normally displays comes from the junit.xml file, so the goal is to get our output into that file. This part is covered in the chapter on API automation design.

pytest command-line options

$ pytest --help

usage: pytest [options] [file_or_dir] [file_or_dir] [...]

The pytest command accepts additional options that control how cases run; the ones most commonly used in practice are listed below.

positional arguments:

  file_or_dir

 

general:

  -k EXPRESSION         Only run tests which match the given substring

                        expression. An expression is a Python evaluable

                        expression where all names are substring-matched

                        against test names and their parent classes.

                        Example: -k 'test_method or test_other' matches all

                        test functions and classes whose name contains

                        'test_method' or 'test_other', while -k 'not

                        test_method' matches those that don't contain

                        'test_method' in their names. -k 'not test_method

                        and not test_other' will eliminate the matches.

                        Additionally keywords are matched to classes and

                        functions containing extra names in their

                        'extra_keyword_matches' set, as well as functions

                        which have names assigned directly to them. The

                        matching is case-insensitive.

  -m MARKEXPR           Only run tests matching given mark expression. For

                        example: -m 'mark1 and not mark2'.

  --markers             show markers (builtin, plugin and per-project ones).

  -x, --exitfirst       Exit instantly on first error or failed test

  --fixtures, --funcargs

                        Show available fixtures, sorted by plugin appearance

                        (fixtures with leading '_' are only shown with '-v')

  --fixtures-per-test   Show fixtures per test

  --pdb                 Start the interactive Python debugger on errors or

                        KeyboardInterrupt

  --pdbcls=modulename:classname

                        Specify a custom interactive Python debugger for use

                        with --pdb.For example:

                        --pdbcls=IPython.terminal.debugger:TerminalPdb

  --trace               Immediately break when running each test

  --capture=method      Per-test capturing method: one of fd|sys|no|tee-sys

  -s                    Shortcut for --capture=no

  --runxfail            Report the results of xfail tests as if they were

                        not marked

  --lf, --last-failed   Rerun only the tests that failed at the last run (or

                        all if none failed)

  --ff, --failed-first  Run all tests, but run the last failures first. This

                        may re-order tests and thus lead to repeated fixture

                        setup/teardown.

  --nf, --new-first     Run tests from new files first, then the rest of the

                        tests sorted by file mtime

  --cache-show=[CACHESHOW]

                        Show cache contents, don't perform collection or

                        tests. Optional argument: glob (default: '*').

  --cache-clear         Remove all cache contents at start of test run

  --lfnf={all,none}, --last-failed-no-failures={all,none}

                        With ``--lf``, determines whether to execute tests

                        when there are no previously (known) failures or

                        when no cached ``lastfailed`` data was found.

                        ``all`` (the default) runs the full test suite

                        again. ``none`` just emits a message about no known

                        failures and exits successfully.

  --sw, --stepwise      Exit on test failure and continue from last failing

                        test next time

  --sw-skip, --stepwise-skip

                        Ignore the first failing test but stop on the next

                        failing test. Implicitly enables --stepwise.

 

Reporting:

  --durations=N         Show N slowest setup/test durations (N=0 for all)

  --durations-min=N     Minimal duration in seconds for inclusion in slowest

                        list. Default: 0.005.

  -v, --verbose         Increase verbosity

  --no-header           Disable header

  --no-summary          Disable summary

  -q, --quiet           Decrease verbosity

  --verbosity=VERBOSE   Set verbosity. Default: 0.

  -r chars              Show extra test summary info as specified by chars:

                        (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed,

                        (p)assed, (P)assed with output, (a)ll except passed

                        (p/P), or (A)ll. (w)arnings are enabled by default

                        (see --disable-warnings), 'N' can be used to reset

                        the list. (default: 'fE').

  --disable-warnings, --disable-pytest-warnings

                        Disable warnings summary

  -l, --showlocals      Show locals in tracebacks (disabled by default)

  --no-showlocals       Hide locals in tracebacks (negate --showlocals

                        passed through addopts)

  --tb=style            Traceback print mode

                        (auto/long/short/line/native/no)

  --show-capture={no,stdout,stderr,log,all}

                        Controls how captured stdout/stderr/log is shown on

                        failed tests. Default: all.

  --full-trace          Don't cut any tracebacks (default is to cut)

  --color=color         Color terminal output (yes/no/auto)

  --code-highlight={yes,no}

                        Whether code should be highlighted (only if --color

                        is also enabled). Default: yes.

  --pastebin=mode       Send failed|all info to bpaste.net pastebin service

  --junit-xml=path      Create junit-xml style report file at given path

  --junit-prefix=str    Prepend prefix to classnames in junit-xml output

 

pytest-warnings:

-W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS

                        Set which warnings to report, see -W option of Python
                        itself


This article is part of the column 进击的测试开发工程师2.0, which focuses on test-case design for complex projects, API automation test cases, and the implementation of custom API test frameworks and tools.
