Compare commits

...

21 Commits

Author SHA1 Message Date
7e34aa106d sphinx documentation; adjusted all docstrings; moved some modules to non-public subpackage 2022-03-24 13:38:30 +01:00
4c6a5412ca removed num_cancellations and num_finished from the pool interface; added num_cancelled; made the num argument non-optional in the start and stop methods of the SimpleTaskPool; changed some internals; improved docstrings 2022-03-17 13:52:02 +01:00
44c03cc493 catching exceptions in _queue_consumer meta task 2022-03-16 16:57:03 +01:00
689a74c678 control interface now supports TaskPool instances:
dotted paths to coroutine functions can be passed to the parser as arguments for methods like `map`;
parser supports literal evaluation for the argument iterables in methods like `map`;
minor fixes
2022-03-16 11:27:27 +01:00
3503c0bf44 removed a few functions from the public API; fixed some docstrings/comments 2022-03-14 19:16:28 +01:00
3d104c979e typo 2022-03-14 18:13:51 +01:00
a92e646411 improved and extended usage/readme docs; small fixes 2022-03-14 18:09:30 +01:00
3d84e1552b version bump 2022-03-13 16:12:04 +01:00
38f4ec1b06 finally reached 100% unittest coverage overall 2022-03-13 16:11:20 +01:00
6f082288d8 brought unittest coverage up to 100% on control modules 2022-03-13 15:44:53 +01:00
9fde231250 moved control-related modules to a sub-package; minor corrections 2022-03-13 15:18:53 +01:00
c72a5035ea big rework of the session-parser-interaction;
dynamically adding pool methods/properties as parser commands;
dynamically executing selected pool method/property;
greatly simplified `ControlSession` class;
removed the need for hard-coded command names;
adjusted unittests accordingly
2022-03-13 14:56:56 +01:00
eb152e4d75 fixed unix server/client tests 2022-03-08 10:22:07 +01:00
d05f84b2c3 additional base commands for control server 2022-03-08 10:15:10 +01:00
7c66604ad0 renamed method get_task_group_ids and extended to accept any number of group names 2022-03-08 09:08:28 +01:00
287906a218 added unittests 2022-03-08 09:05:59 +01:00
ce0f9a1f65 version bump 2022-02-25 22:44:25 +01:00
5dad4ab0c7 implemented TCP socket control; switched example to TCP 2022-02-25 22:42:37 +01:00
ae6bb1bd17 skipping unix socket tests on Windows 2022-02-25 21:17:14 +01:00
e501a849f3 clarifications and corrections 2022-02-25 19:57:54 +01:00
ed6badb088 moved imports for unix socket connections to init methods of client and server 2022-02-25 19:09:28 +01:00
55 changed files with 2613 additions and 1358 deletions

View File

@ -5,7 +5,6 @@ omit =
.venv/* .venv/*
[report] [report]
fail_under = 100
show_missing = True show_missing = True
skip_covered = False skip_covered = False
exclude_lines = exclude_lines =

3
.gitignore vendored
View File

@ -3,9 +3,10 @@
# IDE settings: # IDE settings:
/.idea/ /.idea/
/.vscode/ /.vscode/
# Distribution / packaging: # Distribution / build files:
*.egg-info/ *.egg-info/
/dist/ /dist/
/docs/build/
# Python cache: # Python cache:
__pycache__/ __pycache__/
# Testing: # Testing:

View File

@ -2,9 +2,18 @@
**Dynamically manage pools of asyncio tasks** **Dynamically manage pools of asyncio tasks**
## Contents
- [Contents](#contents)
- [Summary](#summary)
- [Usage](#usage)
- [Installation](#installation)
- [Dependencies](#dependencies)
- [Testing](#testing)
- [License](#license)
## Summary ## Summary
A task pool is an object with a simple interface for aggregating and dynamically managing asynchronous tasks. A **task pool** is an object with a simple interface for aggregating and dynamically managing asynchronous tasks.
With an interface that is intentionally similar to the [`multiprocessing.Pool`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool) class from the standard library, the `TaskPool` provides you such methods as `apply`, `map`, and `starmap` to execute coroutines concurrently as [`asyncio.Task`](https://docs.python.org/3/library/asyncio-task.html#task-object) objects. There is no limitation imposed on what kind of tasks can be run or in what combination, when new ones can be added, or when they can be cancelled. With an interface that is intentionally similar to the [`multiprocessing.Pool`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool) class from the standard library, the `TaskPool` provides you such methods as `apply`, `map`, and `starmap` to execute coroutines concurrently as [`asyncio.Task`](https://docs.python.org/3/library/asyncio-task.html#task-object) objects. There is no limitation imposed on what kind of tasks can be run or in what combination, when new ones can be added, or when they can be cancelled.
@ -22,7 +31,7 @@ from asyncio_taskpool import SimpleTaskPool
... ...
async def work(foo, bar): ... async def work(_foo, _bar): ...
... ...
@ -55,7 +64,7 @@ Python Version 3.8+, tested on Linux
## Testing ## Testing
Install `asyncio-taskpool[dev]` dependencies or just manually install `coverage` with `pip`. Install `asyncio-taskpool[dev]` dependencies or just manually install [`coverage`](https://coverage.readthedocs.io/en/latest/) with `pip`.
Execute the [`./coverage.sh`](coverage.sh) shell script to run all unit tests and receive the coverage report. Execute the [`./coverage.sh`](coverage.sh) shell script to run all unit tests and receive the coverage report.
## License ## License
@ -64,6 +73,6 @@ Execute the [`./coverage.sh`](coverage.sh) shell script to run all unit tests an
The full license texts for the [GNU GPLv3.0](COPYING) and the [GNU LGPLv3.0](COPYING.LESSER) are included in this repository. If not, see https://www.gnu.org/licenses/. The full license texts for the [GNU GPLv3.0](COPYING) and the [GNU LGPLv3.0](COPYING.LESSER) are included in this repository. If not, see https://www.gnu.org/licenses/.
## Copyright ---
© 2022 Daniil Fajnberg © 2022 Daniil Fajnberg

20
docs/Makefile Normal file
View File

@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

35
docs/make.bat Normal file
View File

@ -0,0 +1,35 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd

7
docs/source/api/api.rst Normal file
View File

@ -0,0 +1,7 @@
API
===
.. toctree::
:maxdepth: 4
asyncio_taskpool

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.control.client module
=======================================
.. automodule:: asyncio_taskpool.control.client
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.control.parser module
=======================================
.. automodule:: asyncio_taskpool.control.parser
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,18 @@
asyncio\_taskpool.control package
=================================
.. automodule:: asyncio_taskpool.control
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
.. toctree::
:maxdepth: 4
asyncio_taskpool.control.client
asyncio_taskpool.control.parser
asyncio_taskpool.control.server
asyncio_taskpool.control.session

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.control.server module
=======================================
.. automodule:: asyncio_taskpool.control.server
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.control.session module
========================================
.. automodule:: asyncio_taskpool.control.session
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.exceptions module
===================================
.. automodule:: asyncio_taskpool.exceptions
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.pool module
=============================
.. automodule:: asyncio_taskpool.pool
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,7 @@
asyncio\_taskpool.queue\_context module
=======================================
.. automodule:: asyncio_taskpool.queue_context
:members:
:undoc-members:
:show-inheritance:

View File

@ -0,0 +1,25 @@
asyncio\_taskpool package
=========================
.. automodule:: asyncio_taskpool
:members:
:undoc-members:
:show-inheritance:
Subpackages
-----------
.. toctree::
:maxdepth: 4
asyncio_taskpool.control
Submodules
----------
.. toctree::
:maxdepth: 4
asyncio_taskpool.exceptions
asyncio_taskpool.pool
asyncio_taskpool.queue_context

60
docs/source/conf.py Normal file
View File

@ -0,0 +1,60 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
project = 'asyncio-taskpool'
copyright = '2022 Daniil Fajnberg'
author = 'Daniil Fajnberg'
# The full version, including alpha/beta/rc tags
release = '1.0.0-beta'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.duration',
'sphinx.ext.napoleon'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme_options = {
'style_external_links': True,
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

57
docs/source/index.rst Normal file
View File

@ -0,0 +1,57 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
Welcome to the asyncio-taskpool documentation!
==============================================
:code:`asyncio-taskpool` is a Python library for dynamically and conveniently managing pools of `asyncio <https://docs.python.org/3/library/asyncio.html>`_ tasks.
Purpose
-------
A `task <https://docs.python.org/3/library/asyncio-task.html>`_ is a very powerful tool of concurrency in the Python world. Since concurrency always implies doing more than one thing at a time, you rarely deal with just one :code:`Task` instance. However, managing multiple tasks can quickly become cumbersome as their number increases. Moreover, especially in long-running code, you may find it useful (or even necessary) to dynamically adjust the extent to which the work is distributed, i.e. increase or decrease the number of tasks.
With that in mind, this library aims to provide two things:
#. An additional layer of abstraction and convenience for managing multiple tasks.
#. A simple interface for dynamically adding and removing tasks when a program is already running.
The first is achieved through the concept of a :doc:`task pool <pages/pool>`. The second is achieved by adding a :doc:`control server <pages/control>` to the task pool.
Installation
------------
.. code-block:: bash
$ pip install asyncio-taskpool
Contents
--------
.. toctree::
:maxdepth: 2
pages/pool
pages/control
api/api
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -0,0 +1,107 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
Control interface
=================
When you are dealing with programs that run for a long period of time or even as daemons (i.e. indefinitely), having a way to adjust their behavior without needing to stop and restart them can be desirable.
Task pools offer a high degree of flexibility regarding the number and kind of tasks that run within them, by providing methods to easily start and stop tasks and task groups. But without additional tools, they only allow you to establish a control logic *a priori*, as demonstrated in :ref:`this code snippet <simple-control-logic>`.
What if you have a long-running program that executes certain tasks concurrently, but you don't know in advance how many of them you'll need? What if you want to be able to adjust the number of tasks manually **without stopping the task pool**?
The control server
------------------
The :code:`asyncio-taskpool` library comes with a simple control interface for managing task pools that are already running, at the heart of which is the :py:class:`ControlServer <asyncio_taskpool.control.server.ControlServer>`. Any task pool can be passed to a control server. Once the server is running, you can issue commands to it either via TCP or via UNIX socket. The commands map directly to the task pool methods.
To enable control over a :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>` via local TCP port :code:`8001`, all you need to do is this:
.. code-block:: python
:caption: main.py
:name: control-server-minimal
from asyncio_taskpool import SimpleTaskPool
from asyncio_taskpool.control import TCPControlServer
from .work import any_worker_func
async def main():
...
pool = SimpleTaskPool(any_worker_func, kwargs={'foo': 42, 'bar': some_object})
control = await TCPControlServer(pool, host='127.0.0.1', port=8001).serve_forever()
await control
Under the hood, the :py:class:`ControlServer <asyncio_taskpool.control.server.ControlServer>` simply uses :code:`asyncio.start_server` for instantiating a socket server. The resulting control task will run indefinitely. Cancelling the control task stops the server.
In reality, you would probably want some graceful handler for an interrupt signal that cancels any remaining tasks as well as the serving control task.
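A minimal sketch of such a handler is shown below. It is not part of the package's documented examples; it simply combines the snippet above with the :code:`stop_all()` and :code:`gather_and_close()` calls used elsewhere in these docs and relies on the statement above that cancelling the control task stops the server. :code:`any_worker_func` and :code:`some_object` are the same placeholders as before, and :code:`add_signal_handler` is only available on Unix event loops.

.. code-block:: python
    :caption: main.py (sketch)

    import signal
    from asyncio import CancelledError, get_running_loop

    from asyncio_taskpool import SimpleTaskPool
    from asyncio_taskpool.control import TCPControlServer

    from .work import any_worker_func


    async def main():
        pool = SimpleTaskPool(any_worker_func, kwargs={'foo': 42, 'bar': some_object})
        control = await TCPControlServer(pool, host='127.0.0.1', port=8001).serve_forever()
        # Cancelling the control task stops the server; do that on SIGINT/SIGTERM.
        loop = get_running_loop()
        loop.add_signal_handler(signal.SIGINT, control.cancel)
        loop.add_signal_handler(signal.SIGTERM, control.cancel)
        try:
            await control
        except CancelledError:
            pass  # expected once the control task has been cancelled by a signal
        finally:
            pool.stop_all()                # stop the remaining worker tasks ...
            await pool.gather_and_close()  # ... and wait for them to finish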
The control client
------------------
Technically, any process that can read from and write to the socket exposed by the control server will be able to interact with it. The :code:`asyncio-taskpool` package has its own simple implementation in the form of the :py:class:`ControlClient <asyncio_taskpool.control.client.ControlClient>`, which makes this easy to do out of the box.
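If you would rather embed the client in a script of your own than use the module entry point shown next, a minimal sketch could look like this (it only assumes the :code:`host`/:code:`port` constructor arguments and the :code:`start()` coroutine of the :py:class:`TCPControlClient <asyncio_taskpool.control.client.TCPControlClient>`):

.. code-block:: python
    :caption: client_sketch.py

    from asyncio import run

    from asyncio_taskpool.control import TCPControlClient


    async def main():
        # Connect to the control server from the previous example and enter the interactive prompt.
        client = TCPControlClient(host='127.0.0.1', port=8001)
        await client.start()


    if __name__ == '__main__':
        run(main())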
To start a client, you can use the main script of the :py:mod:`asyncio_taskpool.control` sub-package like this:
.. code-block:: bash
$ python -m asyncio_taskpool.control tcp localhost 8001
This would establish a connection to the control server from the previous example. Calling
.. code-block:: bash
$ python -m asyncio_taskpool.control -h
will display the available client options.
The control session
-------------------
Assuming you connected successfully, you should be greeted by the server with a help message and dropped into a simple input prompt.
.. code-block:: none
Connected to SimpleTaskPool-0
Type '-h' to get help and usage instructions for all available commands.
>
The input sent to the server is handled by a typical argument parser, so the interface should be straightforward. A command like
.. code-block:: none
> start 5
will call the :py:meth:`.start() <asyncio_taskpool.pool.SimpleTaskPool.start>` method with :code:`5` as an argument and thus start 5 new tasks in the pool, while the command
.. code-block:: none
> pool-size
will call the :py:meth:`.pool_size <asyncio_taskpool.pool.BaseTaskPool.pool_size>` property getter and return the maximum number of tasks that can run in the pool.
When you are dealing with a regular :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` instance, starting new tasks works just fine, as long as the coroutine functions you want to use can be imported into the namespace of the pool. If you have a function named :code:`worker` in the module :code:`mymodule` under the package :code:`mypackage` and want to use it in a :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` call with the arguments :code:`'x'`, :code:`'y'`, and :code:`'z'`, you would do it like this:
.. code-block:: none
> map mypackage.mymodule.worker ['x','y','z'] -g 3
The :code:`-g` is a shorthand for :code:`--group-size` in this case. In general, all (public) pool methods will have a corresponding command in the control session.
.. note::
The :code:`ast.literal_eval` function from the `standard library <https://docs.python.org/3/library/ast.html#ast.literal_eval>`_ is used to safely evaluate the iterable of arguments to work on. For obvious reasons, being able to provide arbitrary Python objects in such a control session is neither practical nor secure. The way this is implemented now is limited in that regard, since you can only use Python literals and containers as arguments for your coroutine functions.
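To illustrate what that restriction means, here is roughly how :code:`ast.literal_eval` behaves on its own (plain standard library, nothing specific to this package):

.. code-block:: python

    from ast import literal_eval

    literal_eval("['x', 'y', 'z']")          # OK: a list of string literals
    literal_eval("[(1, 2), {'key': None}]")  # OK: nested literals and containers
    literal_eval("datetime.now()")           # ValueError: arbitrary expressions are rejected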
To exit a control session, use the :code:`exit` command or simply press :code:`Ctrl + D`.

235
docs/source/pages/pool.rst Normal file
View File

@ -0,0 +1,235 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
Task pools
==========
What is a task pool?
--------------------
A task pool is an object with a simple interface for aggregating and dynamically managing asynchronous tasks.
To make use of task pools, your code obviously needs to contain coroutine functions (introduced with the :code:`async def` keywords). When you add such functions along with their arguments to a task pool, they are turned into tasks and executed asynchronously.
If you are familiar with the :code:`Pool` class of the `multiprocessing module <https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool>`_ from the standard library, then you should feel at home with the :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` class. Obviously, there are major conceptual and functional differences between the two, but the methods provided by the :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` follow a very similar logic. If you never worked with process or thread pools, don't worry. Task pools are much simpler.
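To make the analogy concrete, here is a rough side-by-side sketch; the first half is plain standard-library usage, the second mirrors the :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` usage shown in the sections below, and :code:`cpu_bound_func`, :code:`io_bound_coro`, and :code:`data` are placeholders, not names from this package:

.. code-block:: python

    from multiprocessing import Pool

    from asyncio_taskpool import TaskPool


    def with_processes():
        # Standard library: synchronous functions executed in worker processes.
        with Pool(4) as process_pool:
            process_pool.map(cpu_bound_func, data)


    async def with_tasks():
        # asyncio-taskpool: coroutine functions executed as tasks on one event loop.
        pool = TaskPool()
        await pool.map(io_bound_coro, data, group_size=4)
        pool.lock()
        await pool.gather_and_close()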
The :code:`TaskPool` class
--------------------------
There are essentially two distinct use cases for a concurrency pool. You want to
#. execute a function *n* times with the same arguments concurrently or
#. execute a function *n* times with different arguments concurrently.
The first is accomplished with the :py:meth:`TaskPool.apply() <asyncio_taskpool.pool.TaskPool.apply>` method, while the second is accomplished with the :py:meth:`TaskPool.map() <asyncio_taskpool.pool.TaskPool.map>` method and its variations :py:meth:`.starmap() <asyncio_taskpool.pool.TaskPool.starmap>` and :py:meth:`.doublestarmap() <asyncio_taskpool.pool.TaskPool.doublestarmap>`.
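As a rough sketch of how these three calls differ (the argument names are purely illustrative, and the signatures are assumed to mirror :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` as used further below as well as the analogous :code:`multiprocessing.Pool` methods):

.. code-block:: python

    # Same coroutine function, three ways of supplying its arguments:
    await pool.map(worker_coro, [1, 2, 3], group_size=2)                       # worker_coro(1), worker_coro(2), ...
    await pool.starmap(worker_coro, [(1, 'a'), (2, 'b')], group_size=2)        # worker_coro(1, 'a'), ...
    await pool.doublestarmap(worker_coro, [{'x': 1}, {'x': 2}], group_size=2)  # worker_coro(x=1), ...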
Let's take a look at an example. Say you have a coroutine function that takes two queues as arguments: the first being an input queue (containing items to work on) and the second being an output queue (for passing on the results to some other function). Your function may look something like this:
.. code-block:: python
:caption: work.py
:name: queue-worker-function
from asyncio.queues import Queue
async def queue_worker_function(in_queue: Queue, out_queue: Queue) -> None:
while True:
item = await in_queue.get()
... # Do some work on the item and arrive at a result.
await out_queue.put(result)
How would we go about concurrently executing this function, say 5 times? There are (as always) a number of ways to do this with :code:`asyncio`. If we want to use tasks and be clean about it, we can do it like this:
.. code-block:: python
:caption: main.py
from asyncio.tasks import create_task, gather
from .work import queue_worker_function
...
# We assume that the queues have been initialized already.
tasks = []
for _ in range(5):
new_task = create_task(queue_worker_function(q_in, q_out))
tasks.append(new_task)
# Run some other code and let the tasks do their thing.
...
# At some point, we want the tasks to stop waiting for new items and end.
for task in tasks:
task.cancel()
...
await gather(*tasks)
By contrast, here is how you would do it with a task pool:
.. code-block:: python
:caption: main.py
from asyncio_taskpool import TaskPool
from .work import queue_worker_function
...
pool = TaskPool()
group_name = await pool.apply(queue_worker_function, args=(q_in, q_out), num=5)
...
pool.cancel_group(group_name)
...
await pool.flush()
Pretty much self-explanatory, no?
Let's consider a slightly more involved example. Assume you have a coroutine function that takes just one argument (some data) as input, does some work with it (maybe connects to the internet in the process), and eventually writes its results to a database (which is globally defined). Here is how that might look:
.. code-block:: python
:caption: work.py
:name: another-worker-function
from .my_database_stuff import insert_into_results_table
async def another_worker_function(data: object) -> None:
if data.some_attribute > 1:
...
# Do the work, arrive at results.
await insert_into_results_table(results)
Say we have some *iterator* of data-items (of arbitrary length) that we want to be worked on, and say we want 5 coroutines concurrently working on that data. Here is a very naive task-based solution:
.. code-block:: python
:caption: main.py
from asyncio.tasks import create_task, gather
from .work import another_worker_function
async def main():
...
# We got our data_iterator from somewhere.
keep_going = True
while keep_going:
tasks = []
for _ in range(5):
try:
data = next(data_iterator)
except StopIteration:
keep_going = False
break
new_task = create_task(another_worker_function(data))
tasks.append(new_task)
await gather(*tasks)
Here we already run into problems with the task-based approach. The last line in our :code:`while`-loop blocks until **all 5 tasks** return (or raise an exception). This means that as soon as one of them returns, the number of working coroutines is already less than 5 (until all the others return). This can obviously be solved in different ways. We could, for instance, wrap the creation of new tasks itself in a coroutine, which immediately creates a new task when one is finished, and then call that coroutine 5 times concurrently (see the sketch below). Or we could use the queue-based approach from before, but then we would need to write some queue-producing coroutine.
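To make the first of those alternatives a little more tangible, here is one way it could be sketched; this is not library code, and :code:`keep_working` is just a hypothetical wrapper that awaits the worker directly and moves on to the next item as soon as the previous one is done:

.. code-block:: python

    from asyncio.tasks import gather

    from .work import another_worker_function


    async def keep_working(data_iterator):
        # All five wrappers share the same iterator, so each one pulls the next
        # item as soon as its previous item has been processed.
        for data in data_iterator:
            await another_worker_function(data)


    async def main():
        ...
        # We got our data_iterator from somewhere.
        await gather(*(keep_working(data_iterator) for _ in range(5)))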
Or we could use a task pool:
.. code-block:: python
:caption: main.py
from asyncio_taskpool import TaskPool
from .work import another_worker_function
async def main():
...
pool = TaskPool()
await pool.map(another_worker_function, data_iterator, group_size=5)
...
pool.lock()
await pool.gather_and_close()
Calling the :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` method this way ensures that there will **always** -- i.e. at any given moment in time -- be exactly 5 tasks working concurrently on our data (assuming no other pool interaction).
.. note::
The :py:meth:`.gather_and_close() <asyncio_taskpool.pool.BaseTaskPool.gather_and_close>` line will block until **all the data** has been consumed. (see :ref:`blocking-pool-methods`)
It can't get any simpler than that, can it? So glad you asked...
The :code:`SimpleTaskPool` class
--------------------------------
Let's take the :ref:`queue worker example <queue-worker-function>` from before. If we know that the task pool will only ever work with that one function and the same queue objects, we can make use of the :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>` class:
.. code-block:: python
:caption: main.py
from asyncio_taskpool import SimpleTaskPool
from .work import queue_worker_function
async def main():
...
pool = SimpleTaskPool(queue_worker_function, args=(q_in, q_out))
await pool.start(5)
...
pool.stop_all()
...
await pool.gather_and_close()
This may, at first glance, not seem like much of a difference, aside from different method names. However, assume that our main function runs a loop and needs to be able to periodically regulate the number of tasks being executed in the pool based on some additional variables it receives. With the :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>`, this could not be simpler:
.. code-block:: python
:caption: main.py
:name: simple-control-logic
from asyncio_taskpool import SimpleTaskPool
from .work import queue_worker_function
async def main():
...
pool = SimpleTaskPool(queue_worker_function, args=(q_in, q_out))
await pool.start(5)
while True:
...
if some_condition and pool.num_running > 10:
pool.stop(3)
elif some_other_condition and pool.num_running < 5:
pool.start(5)
else:
pool.start(1)
...
await pool.gather_and_close()
Notice how we only specify the function and its arguments during initialization of the pool. From that point on, all we need are the :py:meth:`.start() <asyncio_taskpool.pool.SimpleTaskPool.start>` and :py:meth:`.stop() <asyncio_taskpool.pool.SimpleTaskPool.stop>` methods to adjust the number of concurrently running tasks.
The trade-off here is that this simplified task pool class lacks the flexibility of the regular :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` class. On an instance of the latter we can call :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` and :py:meth:`.apply() <asyncio_taskpool.pool.TaskPool.apply>` as often as we like with completely unrelated functions and arguments. With a :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>`, once you initialize it, it is pegged to one function and one set of arguments, and all you can do is control the number of tasks working with those.
This simplified interface becomes particularly useful in conjunction with the :doc:`control server <./control>`.
.. _blocking-pool-methods:
(Non-)Blocking pool methods
---------------------------
One of the main concerns when dealing with concurrent programs in general and with :code:`async` functions in particular is when and how a particular piece of code **blocks** during execution, i.e. delays the execution of the following code significantly.
.. note::
Every statement will block to *some* extent. Obviously, when a program does something, that takes time. This is why the proper question to ask is not *if* but *to what extent, under which circumstances* the execution of a particular line of code blocks.
It is fair to assume that anyone reading this is familiar enough with the concepts of asynchronous programming in Python to know that just slapping :code:`async` in front of a function definition will not magically make it suitable for concurrent execution (in any meaningful way). Therefore, we assume that you are dealing with coroutines that can actually unblock the `event loop <https://docs.python.org/3/library/asyncio-eventloop.html>`_ (e.g. doing a significant amount of I/O).
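As a quick illustration of that difference (plain standard library, nothing specific to this package):

.. code-block:: python

    import asyncio
    import time


    async def looks_async_but_blocks() -> None:
        time.sleep(1)           # blocks the entire event loop; no other task makes progress


    async def actually_unblocks() -> None:
        await asyncio.sleep(1)  # yields control; other tasks can run in the meantime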
So how does the task pool behave in that regard?
The only method of a pool that one should **always** assume to be blocking is :py:meth:`.gather_and_close() <asyncio_taskpool.pool.BaseTaskPool.gather_and_close>`. This method awaits **all** tasks in the pool, meaning as long as one of them is still running, this coroutine will not return.
.. warning::
This includes awaiting any callbacks that were passed along with the tasks.
One method to be aware of is :py:meth:`.flush() <asyncio_taskpool.pool.BaseTaskPool.flush>`. Since it will await only those tasks that the pool considers **ended** or **cancelled**, the blocking can only come from any callbacks that were provided for either of those situations.
In general, the act of adding tasks to a pool is non-blocking, no matter which particular methods are used. The only notable exception is when a limit on the pool size has been set and there is "not enough room" to add a task. In this case, both :py:meth:`SimpleTaskPool.start() <asyncio_taskpool.pool.SimpleTaskPool.start>` and :py:meth:`TaskPool.apply() <asyncio_taskpool.pool.TaskPool.apply>` will block until the desired number of new tasks has found room in the pool (either because other tasks have ended or because the pool size was increased).
:py:meth:`TaskPool.map() <asyncio_taskpool.pool.TaskPool.map>` (and its variants) will **never** block. Since it makes use of "meta-tasks" under the hood, it will always return immediately. However, if the pool was full when it was called, there is **no guarantee** that even a single task has started when the method returns.

View File

@ -1,2 +1,4 @@
-r common.txt -r common.txt
coverage coverage
sphinx
sphinx-rtd-theme

View File

@ -1,6 +1,6 @@
[metadata] [metadata]
name = asyncio-taskpool name = asyncio-taskpool
version = 0.4.0 version = 1.0.0-beta
author = Daniil Fajnberg author = Daniil Fajnberg
author_email = mail@daniil.fajnberg.de author_email = mail@daniil.fajnberg.de
description = Dynamically manage pools of asyncio tasks description = Dynamically manage pools of asyncio tasks
@ -9,9 +9,9 @@ long_description_content_type = text/markdown
keywords = asyncio, concurrency, tasks, coroutines, asynchronous, server keywords = asyncio, concurrency, tasks, coroutines, asynchronous, server
url = https://git.fajnberg.de/daniil/asyncio-taskpool url = https://git.fajnberg.de/daniil/asyncio-taskpool
project_urls = project_urls =
Bug Tracker = https://git.fajnberg.de/daniil/asyncio-taskpool/issues Bug Tracker = https://github.com/daniil-berg/asyncio-taskpool/issues
classifiers = classifiers =
Development Status :: 3 - Alpha Development Status :: 4 - Beta
Programming Language :: Python :: 3 Programming Language :: Python :: 3
Operating System :: OS Independent Operating System :: OS Independent
License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3) License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)
@ -30,6 +30,8 @@ python_requires = >=3.8
[options.extras_require] [options.extras_require]
dev = dev =
coverage coverage
sphinx
sphinx-rtd-theme
[options.packages.find] [options.packages.find]
where = src where = src

View File

@ -14,10 +14,5 @@ See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.""" If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Brings the main classes up to package level for import convenience.
"""
from .pool import TaskPool, SimpleTaskPool from .pool import TaskPool, SimpleTaskPool
from .server import UnixControlServer

View File

@ -1,67 +0,0 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
CLI client entry point.
"""
import sys
from argparse import ArgumentParser
from asyncio import run
from pathlib import Path
from typing import Dict, Any
from .client import ControlClient, UnixControlClient
from .constants import PACKAGE_NAME
from .pool import TaskPool
from .server import ControlServer
CONN_TYPE = 'conn_type'
UNIX, TCP = 'unix', 'tcp'
SOCKET_PATH = 'path'
def parse_cli() -> Dict[str, Any]:
parser = ArgumentParser(
prog=PACKAGE_NAME,
description=f"CLI based {ControlClient.__name__} for {PACKAGE_NAME}"
)
subparsers = parser.add_subparsers(title="Connection types", dest=CONN_TYPE)
unix_parser = subparsers.add_parser(UNIX, help="Connect via unix socket")
unix_parser.add_argument(
SOCKET_PATH,
type=Path,
help=f"Path to the unix socket on which the {ControlServer.__name__} for the {TaskPool.__name__} is listening."
)
return vars(parser.parse_args())
async def main():
kwargs = parse_cli()
if kwargs[CONN_TYPE] == UNIX:
client = UnixControlClient(socket_path=kwargs[SOCKET_PATH])
elif kwargs[CONN_TYPE] == TCP:
# TODO: Implement the TCP client class
client = UnixControlClient(socket_path=kwargs[SOCKET_PATH])
else:
print("Invalid connection type", file=sys.stderr)
sys.exit(2)
await client.start()
if __name__ == '__main__':
run(main())

View File

@ -0,0 +1,2 @@
from .server import TCPControlServer, UnixControlServer
from .client import TCPControlClient, UnixControlClient

View File

@ -0,0 +1,80 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
CLI entry point script for a :class:`ControlClient`.
"""
from argparse import ArgumentParser
from asyncio import run
from pathlib import Path
from typing import Any, Dict, Sequence
from ..internals.constants import PACKAGE_NAME
from ..pool import TaskPool
from .client import TCPControlClient, UnixControlClient
from .server import TCPControlServer, UnixControlServer
__all__ = []
CLIENT_CLASS = 'client_class'
UNIX, TCP = 'unix', 'tcp'
SOCKET_PATH = 'path'
HOST, PORT = 'host', 'port'
def parse_cli(args: Sequence[str] = None) -> Dict[str, Any]:
parser = ArgumentParser(
prog=f'{PACKAGE_NAME}.control',
description=f"Simple CLI based control client for {PACKAGE_NAME}"
)
subparsers = parser.add_subparsers(title="Connection types")
tcp_parser = subparsers.add_parser(TCP, help="Connect via TCP socket")
tcp_parser.add_argument(
HOST,
help=f"IP address or url that the {TCPControlServer.__name__} for the {TaskPool.__name__} is listening on."
)
tcp_parser.add_argument(
PORT,
type=int,
help=f"Port that the {TCPControlServer.__name__} for the {TaskPool.__name__} is listening on."
)
tcp_parser.set_defaults(**{CLIENT_CLASS: TCPControlClient})
unix_parser = subparsers.add_parser(UNIX, help="Connect via unix socket")
unix_parser.add_argument(
SOCKET_PATH,
type=Path,
help=f"Path to the unix socket on which the {UnixControlServer.__name__} for the {TaskPool.__name__} is "
f"listening."
)
unix_parser.set_defaults(**{CLIENT_CLASS: UnixControlClient})
return vars(parser.parse_args(args))
async def main():
kwargs = parse_cli()
client_cls = kwargs.pop(CLIENT_CLASS)
await client_cls(**kwargs).start()
if __name__ == '__main__':
run(main())

View File

@ -23,17 +23,28 @@ import json
import shutil import shutil
import sys import sys
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from asyncio.streams import StreamReader, StreamWriter, open_unix_connection from asyncio.streams import StreamReader, StreamWriter, open_connection
from pathlib import Path from pathlib import Path
from typing import Optional from typing import Optional, Union
from .constants import CLIENT_EXIT, CLIENT_INFO, SESSION_MSG_BYTES from ..internals.constants import CLIENT_INFO, SESSION_MSG_BYTES
from .types import ClientConnT, PathT from ..internals.types import ClientConnT, PathT
__all__ = [
'ControlClient',
'TCPControlClient',
'UnixControlClient',
'CLIENT_EXIT'
]
CLIENT_EXIT = 'exit'
class ControlClient(ABC): class ControlClient(ABC):
""" """
Abstract base class for a simple implementation of a task pool control client. Abstract base class for a simple implementation of a pool control client.
Since the server's control interface is simply expecting commands to be sent, any process able to connect to the Since the server's control interface is simply expecting commands to be sent, any process able to connect to the
TCP or UNIX socket and issue the relevant commands (and optionally read the responses) will work just as well. TCP or UNIX socket and issue the relevant commands (and optionally read the responses) will work just as well.
@ -41,7 +52,7 @@ class ControlClient(ABC):
""" """
@staticmethod @staticmethod
def client_info() -> dict: def _client_info() -> dict:
"""Returns a dictionary of client information relevant for the handshake with the server.""" """Returns a dictionary of client information relevant for the handshake with the server."""
return {CLIENT_INFO.TERMINAL_WIDTH: shutil.get_terminal_size().columns} return {CLIENT_INFO.TERMINAL_WIDTH: shutil.get_terminal_size().columns}
@ -50,15 +61,15 @@ class ControlClient(ABC):
""" """
Tries to connect to a socket using the provided arguments and return the associated reader-writer-pair. Tries to connect to a socket using the provided arguments and return the associated reader-writer-pair.
This method will be invoked by the public `start()` method with the pre-defined internal `_conn_kwargs` (unpacked) This method will be invoked by the public `start()` method with the pre-defined internal `_conn_kwargs`
as keyword-arguments. (unpacked) as keyword-arguments.
This method should return either a tuple of `asyncio.StreamReader` and `asyncio.StreamWriter` or a tuple of This method should return either a tuple of `asyncio.StreamReader` and `asyncio.StreamWriter` or a tuple of
`None` and `None`, if it failed to establish the defined connection. `None` and `None`, if it failed to establish the defined connection.
""" """
raise NotImplementedError raise NotImplementedError
def __init__(self, **conn_kwargs) -> None: def __init__(self, **conn_kwargs) -> None:
"""Simply stores the connection keyword-arguments necessary for opening the connection.""" """Simply stores the keyword-arguments for opening the connection."""
self._conn_kwargs = conn_kwargs self._conn_kwargs = conn_kwargs
self._connected: bool = False self._connected: bool = False
@ -73,9 +84,10 @@ class ControlClient(ABC):
writer: The `asyncio.StreamWriter` returned by the `_open_connection()` method writer: The `asyncio.StreamWriter` returned by the `_open_connection()` method
""" """
self._connected = True self._connected = True
writer.write(json.dumps(self.client_info()).encode()) writer.write(json.dumps(self._client_info()).encode())
await writer.drain() await writer.drain()
print("Connected to", (await reader.read(SESSION_MSG_BYTES)).decode()) print("Connected to", (await reader.read(SESSION_MSG_BYTES)).decode())
print("Type '-h' to get help and usage instructions for all available commands.\n")
def _get_command(self, writer: StreamWriter) -> Optional[str]: def _get_command(self, writer: StreamWriter) -> Optional[str]:
""" """
@ -90,7 +102,7 @@ class ControlClient(ABC):
""" """
try: try:
msg = input("> ").strip().lower() msg = input("> ").strip().lower()
except EOFError: # Ctrl+D shall be equivalent to the `CLIENT_EXIT` command. except EOFError: # Ctrl+D shall be equivalent to the :const:`CLIENT_EXIT` command.
msg = CLIENT_EXIT msg = CLIENT_EXIT
except KeyboardInterrupt: # Ctrl+C shall simply reset to the input prompt. except KeyboardInterrupt: # Ctrl+C shall simply reset to the input prompt.
print() print()
@ -128,11 +140,14 @@ class ControlClient(ABC):
async def start(self) -> None: async def start(self) -> None:
""" """
This method opens the pre-defined connection, performs the server-handshake, and enters the interaction loop. Opens connection, performs handshake, and enters interaction loop.
An input prompt is presented to the user and any input is sent (encoded) to the connected server.
One exception is the :const:`CLIENT_EXIT` command (equivalent to Ctrl+D), which merely closes the connection.
If the connection can not be established, an error message is printed to `stderr` and the method returns. If the connection can not be established, an error message is printed to `stderr` and the method returns.
If the `_connected` flag is set to `False` during the interaction loop, the method returns and prints out a If either the exit command is issued or the connection to the server is lost during the interaction loop,
disconnected-message. the method returns and prints out a disconnected-message.
""" """
reader, writer = await self._open_connection(**self._conn_kwargs) reader, writer = await self._open_connection(**self._conn_kwargs)
if reader is None: if reader is None:
@ -144,15 +159,36 @@ class ControlClient(ABC):
print("Disconnected from control server.") print("Disconnected from control server.")
class TCPControlClient(ControlClient):
"""Task pool control client for connecting to a :class:`TCPControlServer`."""
def __init__(self, host: str, port: Union[int, str], **conn_kwargs) -> None:
"""`host` and `port` are expected as non-optional connection arguments."""
self._host = host
self._port = port
super().__init__(**conn_kwargs)
async def _open_connection(self, **kwargs) -> ClientConnT:
"""
Wrapper around the `asyncio.open_connection` function.
Returns a tuple of `None` and `None`, if the connection can not be established;
otherwise, the stream-reader and -writer tuple is returned.
"""
try:
return await open_connection(self._host, self._port, **kwargs)
except ConnectionError as e:
print(str(e), file=sys.stderr)
return None, None
class UnixControlClient(ControlClient): class UnixControlClient(ControlClient):
"""Task pool control client that expects a unix socket to be exposed by the control server.""" """Task pool control client for connecting to a :class:`UnixControlServer`."""
def __init__(self, socket_path: PathT, **conn_kwargs) -> None: def __init__(self, socket_path: PathT, **conn_kwargs) -> None:
""" """`socket_path` is expected as a non-optional connection argument."""
In addition to what the base class does, the `socket_path` is expected as a non-optional argument. from asyncio.streams import open_unix_connection
self._open_unix_connection = open_unix_connection
The `_socket_path` attribute is set to the `Path` object created from the `socket_path` argument.
"""
self._socket_path = Path(socket_path) self._socket_path = Path(socket_path)
super().__init__(**conn_kwargs) super().__init__(**conn_kwargs)
@ -164,7 +200,7 @@ class UnixControlClient(ControlClient):
otherwise, the stream-reader and -writer tuple is returned. otherwise, the stream-reader and -writer tuple is returned.
""" """
try: try:
return await open_unix_connection(self._socket_path, **kwargs) return await self._open_unix_connection(self._socket_path, **kwargs)
except FileNotFoundError: except FileNotFoundError:
print("No socket at", self._socket_path, file=sys.stderr) print("No socket at", self._socket_path, file=sys.stderr)
return None, None return None, None

View File

@ -0,0 +1,325 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Definition of the :class:`ControlParser` used in a
:class:`ControlSession <asyncio_taskpool.control.session.ControlSession>`.
"""
from argparse import Action, ArgumentParser, ArgumentDefaultsHelpFormatter, HelpFormatter, SUPPRESS
from ast import literal_eval
from asyncio.streams import StreamWriter
from inspect import Parameter, getmembers, isfunction, signature
from shutil import get_terminal_size
from typing import Any, Callable, Container, Dict, Iterable, Set, Type, TypeVar
from ..exceptions import HelpRequested, ParserError
from ..internals.constants import CLIENT_INFO, CMD, STREAM_WRITER
from ..internals.helpers import get_first_doc_line, resolve_dotted_path
from ..internals.types import ArgsT, CancelCB, CoroutineFunc, EndCB, KwArgsT
__all__ = ['ControlParser']
FmtCls = TypeVar('FmtCls', bound=Type[HelpFormatter])
ParsersDict = Dict[str, 'ControlParser']
OMIT_PARAMS_DEFAULT = ('self', )
NAME, PROG, HELP, DESCRIPTION = 'name', 'prog', 'help', 'description'
class ControlParser(ArgumentParser):
"""
Subclass of the standard :code:`argparse.ArgumentParser` for pool control.
Such a parser is not supposed to ever print to stdout/stderr, but instead direct all messages to a `StreamWriter`
instance passed to it during initialization.
Furthermore, it requires defining the width of the terminal, to adjust help formatting to the terminal size of a
connected client.
Finally, it offers some convenience methods and makes use of custom exceptions.
"""
@staticmethod
def help_formatter_factory(terminal_width: int, base_cls: FmtCls = None) -> FmtCls:
"""
Constructs and returns a subclass of :class:`argparse.HelpFormatter`
The formatter class will have the defined `terminal_width`.
Although a custom formatter class can be explicitly passed into the :class:`ArgumentParser` constructor,
this is not as convenient when making use of sub-parsers.
Args:
terminal_width:
The number of columns of the terminal to which to adjust help formatting.
base_cls (optional):
Base class to use for inheritance. By default :class:`argparse.ArgumentDefaultsHelpFormatter` is used.
Returns:
The subclass of `base_cls` which fixes the constructor's `width` keyword-argument to `terminal_width`.
"""
if base_cls is None:
base_cls = ArgumentDefaultsHelpFormatter
class ClientHelpFormatter(base_cls):
def __init__(self, *args, **kwargs) -> None:
kwargs['width'] = terminal_width
super().__init__(*args, **kwargs)
return ClientHelpFormatter
def __init__(self, stream_writer: StreamWriter, terminal_width: int = None, **kwargs) -> None:
"""
Sets some internal attributes in addition to the base class.
Args:
stream_writer:
The instance of the :class:`asyncio.StreamWriter` to use for message output.
terminal_width (optional):
The terminal width to use for all message formatting. By default the :code:`columns` attribute from
:func:`shutil.get_terminal_size` is taken.
**kwargs(optional):
Passed to the parent class constructor. The exception is the `formatter_class` parameter: Even if a
class is specified, it will always be subclassed in the :meth:`help_formatter_factory`.
Also, by default, `exit_on_error` is set to `False` (as opposed to how the parent class handles it).
"""
self._stream_writer: StreamWriter = stream_writer
self._terminal_width: int = terminal_width if terminal_width is not None else get_terminal_size().columns
kwargs['formatter_class'] = self.help_formatter_factory(self._terminal_width, kwargs.get('formatter_class'))
kwargs.setdefault('exit_on_error', False)
super().__init__(**kwargs)
self._flags: Set[str] = set()
self._commands = None
def add_function_command(self, function: Callable, omit_params: Container[str] = OMIT_PARAMS_DEFAULT,
**subparser_kwargs) -> 'ControlParser':
"""
Takes a function and adds a corresponding (sub-)command to the parser.
The :meth:`add_subparsers` method must have been called prior to this.
NOTE: Currently, only a limited spectrum of parameters can be accurately converted to parser arguments.
This method works correctly with any public method of any task pool class.
Args:
function:
The reference to the function to be "converted" to a parser command.
omit_params (optional):
Names of function parameters not to add as parser arguments.
**subparser_kwargs (optional):
Passed directly to the :meth:`add_parser` method.
Returns:
The subparser instance created from the function.
"""
subparser_kwargs.setdefault(NAME, function.__name__.replace('_', '-'))
subparser_kwargs.setdefault(PROG, subparser_kwargs[NAME])
subparser_kwargs.setdefault(HELP, get_first_doc_line(function))
subparser_kwargs.setdefault(DESCRIPTION, subparser_kwargs[HELP])
subparser: ControlParser = self._commands.add_parser(**subparser_kwargs)
subparser.add_function_args(function, omit_params)
return subparser
def add_property_command(self, prop: property, cls_name: str = '', **subparser_kwargs) -> 'ControlParser':
"""
Same as the :meth:`add_function_command` method, but for properties.
Args:
prop:
The reference to the property to be "converted" to a parser command.
cls_name (optional):
Name of the class the property is defined on to appear in the command help text.
**subparser_kwargs (optional):
Passed directly to the :meth:`add_parser` method.
Returns:
The subparser instance created from the property.
"""
subparser_kwargs.setdefault(NAME, prop.fget.__name__.replace('_', '-'))
subparser_kwargs.setdefault(PROG, subparser_kwargs[NAME])
getter_help = get_first_doc_line(prop.fget)
if prop.fset is None:
subparser_kwargs.setdefault(HELP, getter_help)
else:
subparser_kwargs.setdefault(HELP, f"Get/set the `{cls_name}.{subparser_kwargs[NAME]}` property")
subparser_kwargs.setdefault(DESCRIPTION, subparser_kwargs[HELP])
subparser: ControlParser = self._commands.add_parser(**subparser_kwargs)
if prop.fset is not None:
_, param = signature(prop.fset).parameters.values()
setter_arg_help = f"If provided: {get_first_doc_line(prop.fset)} If omitted: {getter_help}"
subparser.add_function_arg(param, nargs='?', default=SUPPRESS, help=setter_arg_help)
return subparser
def add_class_commands(self, cls: Type, public_only: bool = True, omit_members: Container[str] = (),
member_arg_name: str = CMD) -> ParsersDict:
"""
Adds methods/properties of a class as (sub-)commands to the parser.
The :meth:`add_subparsers` method must have been called prior to this.
NOTE: Currently, only a limited spectrum of function parameters can be accurately converted to parser arguments.
This method works correctly with any task pool class.
Args:
cls:
The reference to the class whose methods/properties are to be "converted" to parser commands.
public_only (optional):
If `False`, protected and private members are considered as well. `True` by default.
omit_members (optional):
Names of functions/properties not to add as parser commands.
member_arg_name (optional):
After parsing the arguments, depending on which command was invoked by the user, the corresponding
method/property will be stored as an extra argument in the parsed namespace under this attribute name.
Returns:
Dictionary mapping class member names to the (sub-)parsers created from them.
"""
parsers: ParsersDict = {}
common_kwargs = {STREAM_WRITER: self._stream_writer, CLIENT_INFO.TERMINAL_WIDTH: self._terminal_width}
for name, member in getmembers(cls):
if name in omit_members or (name.startswith('_') and public_only):
continue
if isfunction(member):
subparser = self.add_function_command(member, **common_kwargs)
elif isinstance(member, property):
subparser = self.add_property_command(member, cls.__name__, **common_kwargs)
else:
continue
subparser.set_defaults(**{member_arg_name: member})
parsers[name] = subparser
return parsers
def add_subparsers(self, *args, **kwargs):
"""Adds the subparsers action as an attribute before returning it."""
self._commands = super().add_subparsers(*args, **kwargs)
return self._commands
def _print_message(self, message: str, *args, **kwargs) -> None:
"""This is overridden to ensure that no messages are sent to stdout/stderr, but always to the stream writer."""
if message:
self._stream_writer.write(message.encode())
def exit(self, status: int = 0, message: str = None) -> None:
"""This is overridden to prevent system exit to be invoked."""
if message:
self._print_message(message)
def error(self, message: str) -> None:
"""Raises the :exc:`ParserError <asyncio_taskpool.exceptions.ParserError>` exception at the end."""
super().error(message=message)
raise ParserError
def print_help(self, file=None) -> None:
"""Raises the :exc:`HelpRequested <asyncio_taskpool.exceptions.HelpRequested>` exception at the end."""
super().print_help(file)
raise HelpRequested
def add_function_arg(self, parameter: Parameter, **kwargs) -> Action:
"""
Takes an :class:`inspect.Parameter` and adds a corresponding parser argument.
NOTE: Currently, only a limited spectrum of parameters can be accurately converted to a parser argument.
This method works correctly with any parameter of any public method of any task pool class.
Args:
parameter: The :class:`inspect.Parameter` object to be converted to a parser argument.
**kwargs: Passed to the :meth:`add_argument` method of the base class.
Returns:
The :class:`argparse.Action` returned by the :meth:`add_argument` method.
"""
if parameter.default is Parameter.empty:
# A non-optional function parameter should correspond to a positional argument.
name_or_flags = [parameter.name]
else:
flag = None
long = f'--{parameter.name.replace("_", "-")}'
# We try to generate a short version (flag) for the argument.
letter = parameter.name[0]
if letter not in self._flags:
flag = f'-{letter}'
self._flags.add(letter)
elif letter.upper() not in self._flags:
flag = f'-{letter.upper()}'
self._flags.add(letter.upper())
name_or_flags = [long] if flag is None else [flag, long]
if parameter.annotation is bool:
# If we are dealing with a boolean parameter, always use the 'store_true' action.
# Even if the parameter's default value is `True`, this will make the parser argument's default `False`.
kwargs.setdefault('action', 'store_true')
else:
# For now, any other type annotation will implicitly use the default action 'store'.
# In addition, we always set the default value.
kwargs.setdefault('default', parameter.default)
if parameter.kind == Parameter.VAR_POSITIONAL:
# This is to be able to later unpack an arbitrary number of positional arguments.
kwargs.setdefault('nargs', '*')
if not kwargs.get('action') == 'store_true':
# Set the type from the parameter annotation.
kwargs.setdefault('type', _get_type_from_annotation(parameter.annotation))
return self.add_argument(*name_or_flags, **kwargs)
def add_function_args(self, function: Callable, omit: Container[str] = OMIT_PARAMS_DEFAULT) -> None:
"""
Takes a function and adds its parameters as arguments to the parser.
NOTE: Currently, only a limited spectrum of parameters can be accurately converted to a parser argument.
This method works correctly with any public method of any task pool class.
Args:
function:
The function whose parameters are to be converted to parser arguments.
Its parameters must be properly annotated.
omit (optional):
Names of function parameters not to add as parser arguments.
"""
for param in signature(function).parameters.values():
if param.name not in omit:
# TODO: Look into parsing docstrings properly to try and extract argument help text.
# For now, the argument help just shows the type it will be converted to.
self.add_function_arg(param, help=repr(param.annotation))
def _get_arg_type_wrapper(cls: Type) -> Callable[[Any], Any]:
"""
Returns a wrapper for the constructor of `cls` to avoid a ValueError being raised on suppressed arguments.
See: https://bugs.python.org/issue36078
"""
def wrapper(arg: Any) -> Any: return arg if arg is SUPPRESS else cls(arg)
# Copy the name of the class to maintain useful help messages when incorrect arguments are passed.
wrapper.__name__ = cls.__name__
return wrapper
def _get_type_from_annotation(annotation: Type) -> Callable[[Any], Any]:
"""
Returns a type conversion function based on the `annotation` passed.
Required to properly convert parsed arguments to the type expected by certain pool methods.
Each conversion function is wrapped by `_get_arg_type_wrapper`.
`Callable`-type annotations give the `resolve_dotted_path` function.
`Iterable`- or args/kwargs-type annotations give the `ast.literal_eval` function.
Others pass unchanged (but still wrapped with `_get_arg_type_wrapper`).
"""
if any(annotation is t for t in {CoroutineFunc, EndCB, CancelCB}):
annotation = resolve_dotted_path
if any(annotation is t for t in {ArgsT, KwArgsT, Iterable[ArgsT], Iterable[KwArgsT]}):
annotation = literal_eval
return _get_arg_type_wrapper(annotation)
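
To make the parameter-to-argument rules above concrete, here is a small standalone sketch that mimics them using only the standard library (it does not use `ControlParser` itself, and the `demo` coroutine with its parameters is made up for illustration): a parameter without a default becomes a positional argument, a `*args`-style parameter gets `nargs='*'`, and a `bool` parameter becomes an optional flag with the `store_true` action.

```python
# Standalone illustration of the parameter-to-argument mapping; `demo` is hypothetical.
from argparse import ArgumentParser
from inspect import Parameter, signature

async def demo(group_name: str, *task_ids: int, verbose: bool = False) -> None:
    ...

parser = ArgumentParser(prog='demo')
for param in signature(demo).parameters.values():
    if param.kind == Parameter.VAR_POSITIONAL:
        # *args-style parameter -> arbitrary number of positional values
        parser.add_argument(param.name, nargs='*', type=param.annotation)
    elif param.default is Parameter.empty:
        # no default -> positional argument
        parser.add_argument(param.name, type=param.annotation)
    elif param.annotation is bool:
        # boolean parameter -> optional flag using the 'store_true' action
        parser.add_argument(f'--{param.name}', action='store_true')

print(parser.parse_args(['default', '1', '2', '--verbose']))
# Namespace(group_name='default', task_ids=[1, 2], verbose=True)
```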

View File

@ -15,7 +15,7 @@ You should have received a copy of the GNU Lesser General Public License along w
If not, see <https://www.gnu.org/licenses/>.""" If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
This module contains the task pool control server class definitions. Task pool control server class definitions.
""" """
@ -23,35 +23,73 @@ import logging
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from asyncio import AbstractServer from asyncio import AbstractServer
from asyncio.exceptions import CancelledError from asyncio.exceptions import CancelledError
from asyncio.streams import StreamReader, StreamWriter, start_unix_server from asyncio.streams import StreamReader, StreamWriter, start_server
from asyncio.tasks import Task, create_task from asyncio.tasks import Task, create_task
from pathlib import Path from pathlib import Path
from typing import Optional, Union from typing import Optional, Union
from .client import ControlClient, UnixControlClient from .client import ControlClient, TCPControlClient, UnixControlClient
from .pool import TaskPool, SimpleTaskPool
from .session import ControlSession from .session import ControlSession
from .types import ConnectedCallbackT from ..pool import AnyTaskPoolT
from ..internals.types import ConnectedCallbackT, PathT
__all__ = ['ControlServer', 'TCPControlServer', 'UnixControlServer']
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
class ControlServer(ABC): # TODO: Implement interface for normal TaskPool instances, not just SimpleTaskPool class ControlServer(ABC):
""" """
Abstract base class for a task pool control server. Abstract base class for a task pool control server.
This class acts as a wrapper around an async server instance and initializes a `ControlSession` upon a client This class acts as a wrapper around an async server instance and initializes a
connecting to it. The entire interface is defined within that session class. :class:`ControlSession <asyncio_taskpool.control.session.ControlSession>` once a client connects to it.
The interface is defined within the session class.
""" """
_client_class = ControlClient _client_class = ControlClient
@classmethod @classmethod
@property @property
def client_class_name(cls) -> str: def client_class_name(cls) -> str:
"""Returns the name of the control client class matching the server class.""" """Returns the name of the matching control client class."""
return cls._client_class.__name__ return cls._client_class.__name__
def __init__(self, pool: AnyTaskPoolT, **server_kwargs) -> None:
"""
Merely sets internal attributes, but does not start the server yet.
The task pool must be passed here and can not be set/changed afterwards. This means a control server is always
tied to one specific task pool.
Args:
pool:
An instance of a `BaseTaskPool` subclass to tie the server to.
**server_kwargs (optional):
Keyword arguments that will be passed into the function that starts the server.
"""
self._pool: AnyTaskPoolT = pool
self._server_kwargs = server_kwargs
self._server: Optional[AbstractServer] = None
@property
def pool(self) -> AnyTaskPoolT:
"""The task pool instance controlled by the server."""
return self._pool
def is_serving(self) -> bool:
"""Wrapper around the `asyncio.Server.is_serving` method."""
return self._server.is_serving()
async def _client_connected_cb(self, reader: StreamReader, writer: StreamWriter) -> None:
"""
The universal client callback that will be passed into the `_get_server_instance` method.
Instantiates a control session, performs the client handshake, and enters the session's `listen` loop.
"""
session = ControlSession(self, reader, writer)
await session.client_handshake()
await session.listen()
@abstractmethod @abstractmethod
async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer: async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer:
""" """
@ -74,40 +112,6 @@ class ControlServer(ABC): # TODO: Implement interface for normal TaskPool insta
"""The method to run after the server's `serve_forever` methods ends for whatever reason.""" """The method to run after the server's `serve_forever` methods ends for whatever reason."""
raise NotImplementedError raise NotImplementedError
def __init__(self, pool: Union[TaskPool, SimpleTaskPool], **server_kwargs) -> None:
"""
Initializes by merely saving the internal attributes, but without starting the server yet.
The task pool must be passed here and can not be set/changed afterwards. This means a control server is always
tied to one specific task pool.
Args:
pool:
An instance of a `BaseTaskPool` subclass to tie the server to.
**server_kwargs (optional):
Keyword arguments that will be passed into the function that starts the server.
"""
self._pool: Union[TaskPool, SimpleTaskPool] = pool
self._server_kwargs = server_kwargs
self._server: Optional[AbstractServer] = None
@property
def pool(self) -> Union[TaskPool, SimpleTaskPool]:
"""Read-only property for accessing the task pool instance controlled by the server."""
return self._pool
def is_serving(self) -> bool:
"""Wrapper around the `asyncio.Server.is_serving` method."""
return self._server.is_serving()
async def _client_connected_cb(self, reader: StreamReader, writer: StreamWriter) -> None:
"""
The universal client callback that will be passed into the `_get_server_instance` method.
Instantiates a control session, performs the client handshake, and enters the session's `listen` loop.
"""
session = ControlSession(self, reader, writer)
await session.client_handshake()
await session.listen()
async def _serve_forever(self) -> None: async def _serve_forever(self) -> None:
""" """
To be run as an `asyncio.Task` by the following method. To be run as an `asyncio.Task` by the following method.
@ -124,24 +128,50 @@ class ControlServer(ABC): # TODO: Implement interface for normal TaskPool insta
async def serve_forever(self) -> Task: async def serve_forever(self) -> Task:
""" """
This method actually starts the server and begins listening to client connections on the specified interface. Starts the server and begins listening to client connections.
It should never block because the serving will be performed in a separate task. It should never block because the serving will be performed in a separate task.
Returns:
The forever serving task. To stop the server, this task should be cancelled.
""" """
log.debug("Starting %s...", self.__class__.__name__) log.debug("Starting %s...", self.__class__.__name__)
self._server = await self._get_server_instance(self._client_connected_cb, **self._server_kwargs) self._server = await self._get_server_instance(self._client_connected_cb, **self._server_kwargs)
return create_task(self._serve_forever()) return create_task(self._serve_forever())
class UnixControlServer(ControlServer): class TCPControlServer(ControlServer):
"""Task pool control server class that exposes a unix socket for control clients to connect to.""" """Exposes a TCP socket for control clients to connect to."""
_client_class = UnixControlClient _client_class = TCPControlClient
def __init__(self, pool: SimpleTaskPool, **server_kwargs) -> None: def __init__(self, pool: AnyTaskPoolT, host: str, port: Union[int, str], **server_kwargs) -> None:
self._socket_path = Path(server_kwargs.pop('path')) """`host` and `port` are expected as non-optional server arguments."""
self._host = host
self._port = port
super().__init__(pool, **server_kwargs) super().__init__(pool, **server_kwargs)
async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer: async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer:
server = await start_unix_server(client_connected_cb, self._socket_path, **kwargs) server = await start_server(client_connected_cb, self._host, self._port, **kwargs)
log.debug("Opened socket at %s:%s", self._host, self._port)
return server
def _final_callback(self) -> None:
log.debug("Closed socket at %s:%s", self._host, self._port)
class UnixControlServer(ControlServer):
"""Exposes a unix socket for control clients to connect to."""
_client_class = UnixControlClient
def __init__(self, pool: AnyTaskPoolT, socket_path: PathT, **server_kwargs) -> None:
"""`socket_path` is expected as a non-optional server argument."""
from asyncio.streams import start_unix_server
self._start_unix_server = start_unix_server
self._socket_path = Path(socket_path)
super().__init__(pool, **server_kwargs)
async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer:
server = await self._start_unix_server(client_connected_cb, self._socket_path, **kwargs)
log.debug("Opened socket '%s'", str(self._socket_path)) log.debug("Opened socket '%s'", str(self._socket_path))
return server return server
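
For context, a hedged end-to-end sketch of how the reworked `TCPControlServer` might be wired up; the `work` coroutine, host/port, and timing are made up, and the import paths follow the module layout shown in this diff.

```python
import asyncio

from asyncio_taskpool.pool import TaskPool
from asyncio_taskpool.control.server import TCPControlServer

async def work(seconds: int) -> None:
    # Placeholder workload; made up for this example.
    await asyncio.sleep(seconds)

async def main() -> None:
    pool = TaskPool()
    server = TCPControlServer(pool, host='127.0.0.1', port=9999)
    serving_task = await server.serve_forever()  # does not block; serving runs in its own task
    await pool.apply(work, args=(10,), num=3)    # give control clients something to inspect
    await asyncio.sleep(60)                      # ... clients connect and issue commands here
    serving_task.cancel()                        # stopping the server means cancelling this task
    pool.lock()
    await pool.gather_and_close()

if __name__ == '__main__':
    asyncio.run(main())
```

A `TCPControlClient` pointed at the same host and port could then connect and issue commands for the lifetime of the serving task.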

View File

@ -0,0 +1,191 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Definition of the :class:`ControlSession` used by a :class:`ControlServer`.
"""
import logging
import json
from argparse import ArgumentError
from asyncio.streams import StreamReader, StreamWriter
from inspect import isfunction, signature
from typing import Callable, Optional, Union, TYPE_CHECKING
from .parser import ControlParser
from ..exceptions import CommandError, HelpRequested, ParserError
from ..pool import TaskPool, SimpleTaskPool
from ..internals.constants import CLIENT_INFO, CMD, CMD_OK, SESSION_MSG_BYTES, STREAM_WRITER
from ..internals.helpers import return_or_exception
if TYPE_CHECKING:
from .server import ControlServer
__all__ = ['ControlSession']
log = logging.getLogger(__name__)
class ControlSession:
"""
Manages a single control session between a server and a client.
The commands received from a connected client are translated into method calls on the task pool instance.
A subclass of the standard :class:`argparse.ArgumentParser` is used to handle the input read from the stream.
"""
def __init__(self, server: 'ControlServer', reader: StreamReader, writer: StreamWriter) -> None:
"""
Connection to the control server should already have been established.
For more convenient/efficient access, some of the server's properties are saved in separate attributes.
The argument parser is _not_ instantiated in the constructor. It requires a bit of client information during
initialization, which is obtained in the `client_handshake` method; only there is the parser fully configured.
Args:
server:
The instance of a :class:`ControlServer` subclass starting the session.
reader:
The `asyncio.StreamReader` created when a client connected to the server.
writer:
The `asyncio.StreamWriter` created when a client connected to the server.
"""
self._control_server: 'ControlServer' = server
self._pool: Union[TaskPool, SimpleTaskPool] = server.pool
self._client_class_name = server.client_class_name
self._reader: StreamReader = reader
self._writer: StreamWriter = writer
self._parser: Optional[ControlParser] = None
async def _exec_method_and_respond(self, method: Callable, **kwargs) -> None:
"""
Takes a pool method reference, executes it, and writes a response accordingly.
If the first parameter is named `self`, the method will be called with the `_pool` instance as its first
positional argument.
If it returns nothing, the response upon successful execution will be :const:`constants.CMD_OK`, otherwise the
response written to the stream will be its return value (as an encoded string).
Args:
method:
The reference to the method defined on the `_pool` instance's class.
**kwargs (optional):
Must correspond to the arguments expected by the `method`.
Correctly unpacks arbitrary-length positional and keyword-arguments.
"""
log.debug("%s calls %s.%s", self._client_class_name, self._pool.__class__.__name__, method.__name__)
normal_pos, var_pos = [], []
for param in signature(method).parameters.values():
if param.name == 'self':
normal_pos.append(self._pool)
elif param.kind in (param.POSITIONAL_OR_KEYWORD, param.POSITIONAL_ONLY):
normal_pos.append(kwargs.pop(param.name))
elif param.kind == param.VAR_POSITIONAL:
var_pos = kwargs.pop(param.name)
output = await return_or_exception(method, *normal_pos, *var_pos, **kwargs)
self._writer.write(CMD_OK if output is None else str(output).encode())
async def _exec_property_and_respond(self, prop: property, **kwargs) -> None:
"""
Takes a pool property reference, executes its setter or getter, and writes a response accordingly.
The property set/get method will always be called with the `_pool` instance as its first positional argument.
Args:
prop:
The reference to the property defined on the `_pool` instance's class.
**kwargs (optional):
If not empty, the property setter is executed and the keyword arguments are passed along to it; the
response upon successful execution will be :const:`constants.CMD_OK`. Otherwise the property getter is
executed and the response written to the stream will be its return value (as an encoded string).
"""
if kwargs:
log.debug("%s sets %s.%s", self._client_class_name, self._pool.__class__.__name__, prop.fset.__name__)
await return_or_exception(prop.fset, self._pool, **kwargs)
self._writer.write(CMD_OK)
else:
log.debug("%s gets %s.%s", self._client_class_name, self._pool.__class__.__name__, prop.fget.__name__)
self._writer.write(str(await return_or_exception(prop.fget, self._pool)).encode())
async def client_handshake(self) -> None:
"""
Must be invoked before starting any other client interaction.
Client info is retrieved, server info is sent back, and the
:class:`ControlParser <asyncio_taskpool.control.parser.ControlParser>` is set up.
"""
client_info = json.loads((await self._reader.read(SESSION_MSG_BYTES)).decode().strip())
log.debug("%s connected", self._client_class_name)
parser_kwargs = {
STREAM_WRITER: self._writer,
CLIENT_INFO.TERMINAL_WIDTH: client_info[CLIENT_INFO.TERMINAL_WIDTH],
'prog': '',
'usage': f'[-h] [{CMD}] ...'
}
self._parser = ControlParser(**parser_kwargs)
self._parser.add_subparsers(title="Commands",
metavar="(A command followed by '-h' or '--help' will show command-specific help.)")
self._parser.add_class_commands(self._pool.__class__)
self._writer.write(str(self._pool).encode())
await self._writer.drain()
async def _parse_command(self, msg: str) -> None:
"""
Takes a message from the client and attempts to parse it.
If a parsing error occurs, it is returned to the client. If the :exc:`HelpRequested` exception was raised by the
:class:`ControlParser`, nothing else happens. Otherwise, the appropriate `_exec...` method is called with the
entire dictionary of keyword-arguments returned by the :class:`ControlParser` passed into it.
Args:
msg: The non-empty string read from the client stream.
"""
try:
kwargs = vars(self._parser.parse_args(msg.split(' ')))
except ArgumentError as e:
log.debug("%s got an ArgumentError", self._client_class_name)
self._writer.write(str(e).encode())
return
except (HelpRequested, ParserError):
log.debug("%s received usage help", self._client_class_name)
return
command = kwargs.pop(CMD)
if isfunction(command):
await self._exec_method_and_respond(command, **kwargs)
elif isinstance(command, property):
await self._exec_property_and_respond(command, **kwargs)
else:
self._writer.write(str(CommandError(f"Unknown command object: {command}")).encode())
async def listen(self) -> None:
"""
Enters the main control loop listening to client input.
This method only returns if either the server or the client disconnects.
Messages from the client are read, parsed, and turned into pool commands (if possible).
This method should be called once the client connection has been established and the handshake was successful.
Note that it will block for the entire duration of the session.
"""
while self._control_server.is_serving():
msg = (await self._reader.read(SESSION_MSG_BYTES)).decode().strip()
if not msg:
log.debug("%s disconnected", self._client_class_name)
break
await self._parse_command(msg)
await self._writer.drain()
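
The session code above also implies a simple wire protocol: the client first sends its info as JSON (only the terminal width is read here), receives the pool's string representation, and from then on each message it sends is parsed as a command, with the response (or help/error text) written back. Below is a hedged raw-socket sketch that bypasses the provided client classes; the host/port are made up, and the availability of the default `-h` help flag is assumed.

```python
import asyncio
import json

async def raw_control_client(host: str = '127.0.0.1', port: int = 9999) -> None:
    reader, writer = await asyncio.open_connection(host, port)
    # Handshake: send client info as JSON, receive the server's pool description.
    writer.write(json.dumps({'terminal_width': 80}).encode())
    await writer.drain()
    print((await reader.read(102400)).decode())  # 102400 == SESSION_MSG_BYTES
    # Asking for help exercises the overridden print_help/_print_message path:
    # the usage text comes back over the stream instead of stdout.
    writer.write(b'-h')
    await writer.drain()
    print((await reader.read(102400)).decode())
    writer.close()
    await writer.wait_closed()

asyncio.run(raw_control_client())
```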

View File

@ -63,13 +63,13 @@ class ServerException(Exception):
pass pass
class UnknownTaskPoolClass(ServerException):
pass
class NotATaskPool(ServerException):
pass
class HelpRequested(ServerException): class HelpRequested(ServerException):
pass pass
class ParserError(ServerException):
pass
class CommandError(ServerException):
pass

View File

@ -1,69 +0,0 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Miscellaneous helper functions.
"""
from asyncio.coroutines import iscoroutinefunction
from asyncio.queues import Queue
from inspect import getdoc
from typing import Any, Optional, Union
from .types import T, AnyCallableT, ArgsT, KwArgsT
async def execute_optional(function: AnyCallableT, args: ArgsT = (), kwargs: KwArgsT = None) -> Optional[T]:
if not callable(function):
return
if kwargs is None:
kwargs = {}
if iscoroutinefunction(function):
return await function(*args, **kwargs)
return function(*args, **kwargs)
def star_function(function: AnyCallableT, arg: Any, arg_stars: int = 0) -> T:
if arg_stars == 0:
return function(arg)
if arg_stars == 1:
return function(*arg)
if arg_stars == 2:
return function(**arg)
raise ValueError(f"Invalid argument arg_stars={arg_stars}; must be 0, 1, or 2.")
async def join_queue(q: Queue) -> None:
await q.join()
def tasks_str(num: int) -> str:
return "tasks" if num != 1 else "task"
def get_first_doc_line(obj: object) -> str:
return getdoc(obj).strip().split("\n", 1)[0].strip()
async def return_or_exception(_function_to_execute: AnyCallableT, *args, **kwargs) -> Union[T, Exception]:
try:
if iscoroutinefunction(_function_to_execute):
return await _function_to_execute(*args, **kwargs)
else:
return _function_to_execute(*args, **kwargs)
except Exception as e:
return e

View File

@ -16,32 +16,24 @@ If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
Constants used by more than one module in the package. Constants used by more than one module in the package.
This module should **not** be considered part of the public API.
""" """
PACKAGE_NAME = 'asyncio_taskpool' PACKAGE_NAME = 'asyncio_taskpool'
DEFAULT_TASK_GROUP = '' DEFAULT_TASK_GROUP = 'default'
DATETIME_FORMAT = '%Y-%m-%d_%H-%M-%S' DATETIME_FORMAT = '%Y-%m-%d_%H-%M-%S'
CLIENT_EXIT = 'exit'
SESSION_MSG_BYTES = 1024 * 100 SESSION_MSG_BYTES = 1024 * 100
SESSION_WRITER = 'session_writer'
STREAM_WRITER = 'stream_writer'
CMD = 'command'
CMD_OK = b"ok"
class CLIENT_INFO: class CLIENT_INFO:
__slots__ = () __slots__ = ()
TERMINAL_WIDTH = 'terminal_width' TERMINAL_WIDTH = 'terminal_width'
class CMD:
__slots__ = ()
CMD = 'command'
NAME = 'name'
POOL_SIZE = 'pool-size'
NUM_RUNNING = 'num-running'
START = 'start'
STOP = 'stop'
STOP_ALL = 'stop-all'
FUNC_NAME = 'func-name'

View File

@ -15,7 +15,9 @@ You should have received a copy of the GNU Lesser General Public License along w
If not, see <https://www.gnu.org/licenses/>.""" If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
This module contains the definition of the `TaskGroupRegister` class. Definition of :class:`TaskGroupRegister`.
It should not be considered part of the public API.
""" """
@ -26,9 +28,9 @@ from typing import Iterator, Set
class TaskGroupRegister(MutableSet): class TaskGroupRegister(MutableSet):
""" """
This class combines the interface of a regular `set` with that of the `asyncio.Lock`. Combines the interface of a regular `set` with that of the `asyncio.Lock`.
It serves simultaneously as a container of IDs of tasks that belong to the same group, and as a mechanism for Serves simultaneously as a container of IDs of tasks that belong to the same group, and as a mechanism for
preventing race conditions within a task group. The lock should be acquired before cancelling the entire group of preventing race conditions within a task group. The lock should be acquired before cancelling the entire group of
tasks, as well as before starting a task within the group. tasks, as well as before starting a task within the group.
""" """

View File

@ -0,0 +1,139 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Miscellaneous helper functions. None of these should be considered part of the public API.
"""
from asyncio.coroutines import iscoroutinefunction
from asyncio.queues import Queue
from importlib import import_module
from inspect import getdoc
from typing import Any, Optional, Union
from .types import T, AnyCallableT, ArgsT, KwArgsT
async def execute_optional(function: AnyCallableT, args: ArgsT = (), kwargs: KwArgsT = None) -> Optional[T]:
"""
Runs `function` with `args` and `kwargs` and returns its output.
Args:
function:
Any callable that accepts the provided positional and keyword-arguments.
If it is a coroutine function, it will be awaited.
If it is not a callable, nothing is returned.
args (optional):
Positional arguments to pass to `function`.
kwargs (optional):
Keyword-arguments to pass to `function`.
Returns:
Whatever `function` returns (possibly after being awaited) or `None` if `function` is not callable.
"""
if not callable(function):
return
if kwargs is None:
kwargs = {}
if iscoroutinefunction(function):
return await function(*args, **kwargs)
return function(*args, **kwargs)
def star_function(function: AnyCallableT, arg: Any, arg_stars: int = 0) -> T:
"""
Calls `function` passing `arg` to it, optionally unpacking it first.
Args:
function:
Any callable that accepts the provided argument(s).
arg:
The single positional argument that `function` expects; in this case `arg_stars` should be 0.
Or the iterable of positional arguments that `function` expects; in this case `arg_stars` should be 1.
Or the mapping of keyword-arguments that `function` expects; in this case `arg_stars` should be 2.
arg_stars (optional):
Determines if and how to unpack `arg`.
0 means no unpacking, i.e. `arg` is passed into `function` directly as `function(arg)`.
1 means unpacking to an arbitrary number of positional arguments, i.e. as `function(*arg)`.
2 means unpacking to an arbitrary number of keyword-arguments, i.e. as `function(**arg)`.
Returns:
Whatever `function` returns.
Raises:
`ValueError`: `arg_stars` is something other than 0, 1, or 2.
"""
if arg_stars == 0:
return function(arg)
if arg_stars == 1:
return function(*arg)
if arg_stars == 2:
return function(**arg)
raise ValueError(f"Invalid argument arg_stars={arg_stars}; must be 0, 1, or 2.")
async def join_queue(q: Queue) -> None:
"""Wrapper function around the join method of an `asyncio.Queue` instance."""
await q.join()
def get_first_doc_line(obj: object) -> str:
"""Takes an object and returns the first (non-empty) line of its docstring."""
return getdoc(obj).strip().split("\n", 1)[0].strip()
async def return_or_exception(_function_to_execute: AnyCallableT, *args, **kwargs) -> Union[T, Exception]:
"""
Returns the output of a function or the exception thrown during its execution.
Args:
_function_to_execute:
Any callable that accepts the provided positional and keyword-arguments.
*args (optional):
Positional arguments to pass to `_function_to_execute`.
**kwargs (optional):
Keyword-arguments to pass to `_function_to_execute`.
Returns:
Whatever `_function_to_execute` returns or throws. (An exception is not raised, but returned!)
"""
try:
if iscoroutinefunction(_function_to_execute):
return await _function_to_execute(*args, **kwargs)
else:
return _function_to_execute(*args, **kwargs)
except Exception as e:
return e
def resolve_dotted_path(dotted_path: str) -> object:
"""
Resolves a dotted path to a global object and returns that object.
Algorithm shamelessly stolen from the `logging.config` module from the standard library.
"""
names = dotted_path.split('.')
module_name = names.pop(0)
found = import_module(module_name)
for name in names:
try:
found = getattr(found, name)
except AttributeError:
module_name += f'.{name}'
import_module(module_name)
found = getattr(found, name)
return found
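
A quick demonstration of two of these (internal) helpers, based on the behavior documented above; the `add` function is made up, and `json.dumps` is just an arbitrary dotted path to resolve.

```python
import json

from asyncio_taskpool.internals.helpers import resolve_dotted_path, star_function

def add(x: int, y: int) -> int:
    return x + y

print(star_function(abs, -5))                             # arg_stars=0: abs(-5) -> 5
print(star_function(add, (2, 3), arg_stars=1))            # add(*(2, 3)) -> 5
print(star_function(add, {'x': 2, 'y': 3}, arg_stars=2))  # add(**{'x': 2, 'y': 3}) -> 5
print(resolve_dotted_path('json.dumps') is json.dumps)    # True
```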

View File

@ -16,6 +16,8 @@ If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
Custom type definitions used in various modules. Custom type definitions used in various modules.
This module should **not** be considered part of the public API.
""" """

View File

@ -15,19 +15,15 @@ You should have received a copy of the GNU Lesser General Public License along w
If not, see <https://www.gnu.org/licenses/>.""" If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
This module contains the definitions of the task pool classes. Definitions of the task pool classes.
A task pool is an object with a simple interface for aggregating and dynamically managing asynchronous tasks. The :class:`BaseTaskPool` is a parent class and not intended for direct use.
Generally speaking, a task is added to a pool by providing it with a coroutine function reference as well as the The :class:`TaskPool` and :class:`SimpleTaskPool` are subclasses intended for direct use.
arguments for that function.
The `BaseTaskPool` class is a parent class and not intended for direct use.
The `TaskPool` and `SimpleTaskPool` are subclasses intended for direct use.
While the former allows for heterogeneous collections of tasks that can be entirely unrelated to one another, the While the former allows for heterogeneous collections of tasks that can be entirely unrelated to one another, the
latter requires a preemptive decision about the function **and** its arguments upon initialization and only allows latter requires a preemptive decision about the function **and** its arguments upon initialization and only allows
to dynamically control the **number** of tasks running at any point in time. to dynamically control the **number** of tasks running at any point in time.
For further details about the classes check their respective docstrings. For further details about the classes check their respective documentation.
""" """
@ -40,14 +36,22 @@ from asyncio.tasks import Task, create_task, gather
from contextlib import suppress from contextlib import suppress
from datetime import datetime from datetime import datetime
from math import inf from math import inf
from typing import Any, Awaitable, Dict, Iterable, Iterator, List, Set from typing import Any, Awaitable, Dict, Iterable, Iterator, List, Set, Union
from . import exceptions from . import exceptions
from .constants import DEFAULT_TASK_GROUP, DATETIME_FORMAT
from .group_register import TaskGroupRegister
from .helpers import execute_optional, star_function, join_queue
from .queue_context import Queue from .queue_context import Queue
from .types import ArgsT, KwArgsT, CoroutineFunc, EndCB, CancelCB from .internals.constants import DEFAULT_TASK_GROUP, DATETIME_FORMAT
from .internals.group_register import TaskGroupRegister
from .internals.helpers import execute_optional, star_function, join_queue
from .internals.types import ArgsT, KwArgsT, CoroutineFunc, EndCB, CancelCB
__all__ = [
'BaseTaskPool',
'TaskPool',
'SimpleTaskPool',
'AnyTaskPoolT'
]
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -59,16 +63,14 @@ class BaseTaskPool:
@classmethod @classmethod
def _add_pool(cls, pool: 'BaseTaskPool') -> int: def _add_pool(cls, pool: 'BaseTaskPool') -> int:
"""Adds a `pool` (instance of any subclass) to the general list of pools and returns it's index in the list.""" """Adds a `pool` to the general list of pools and returns it's index."""
cls._pools.append(pool) cls._pools.append(pool)
return len(cls._pools) - 1 return len(cls._pools) - 1
def __init__(self, pool_size: int = inf, name: str = None) -> None: def __init__(self, pool_size: int = inf, name: str = None) -> None:
"""Initializes the necessary internal attributes and adds the new pool to the general pools list.""" """Initializes the necessary internal attributes and adds the new pool to the general pools list."""
# Initialize a counter for the total number of tasks started through the pool and one for the total number of # Initialize a counter for the total number of tasks started through the pool.
# tasks cancelled through the pool.
self._num_started: int = 0 self._num_started: int = 0
self._num_cancellations: int = 0
# Initialize flags; immutably set the name. # Initialize flags; immutably set the name.
self._locked: bool = False self._locked: bool = False
@ -97,30 +99,29 @@ class BaseTaskPool:
@property @property
def pool_size(self) -> int: def pool_size(self) -> int:
"""Returns the maximum number of concurrently running tasks currently set in the pool.""" """Maximum number of concurrently running tasks allowed in the pool."""
return self._pool_size return getattr(self._enough_room, '_value')
@pool_size.setter @pool_size.setter
def pool_size(self, value: int) -> None: def pool_size(self, value: int) -> None:
""" """
Sets the maximum number of concurrently running tasks in the pool. Sets the maximum number of concurrently running tasks in the pool.
Args:
value:
A non-negative integer.
NOTE: Increasing the pool size will immediately start tasks that are awaiting enough room to run. NOTE: Increasing the pool size will immediately start tasks that are awaiting enough room to run.
Args:
value: A non-negative integer.
Raises: Raises:
`ValueError` if `value` is less than 0. `ValueError`: `value` is less than 0.
""" """
if value < 0: if value < 0:
raise ValueError("Pool size can not be less than 0") raise ValueError("Pool size can not be less than 0")
self._enough_room._value = value self._enough_room._value = value
self._pool_size = value
@property @property
def is_locked(self) -> bool: def is_locked(self) -> bool:
"""Returns `True` if more the pool has been locked (see below).""" """`True` if the pool has been locked (see below)."""
return self._locked return self._locked
def lock(self) -> None: def lock(self) -> None:
@ -138,26 +139,26 @@ class BaseTaskPool:
@property @property
def num_running(self) -> int: def num_running(self) -> int:
""" """
Returns the number of tasks in the pool that are (at that moment) still running. Number of tasks in the pool that are still running.
At the moment a task's `end_callback` or `cancel_callback` is fired, it is no longer considered running. At the moment a task's `end_callback` or `cancel_callback` is fired, it is no longer considered running.
""" """
return len(self._tasks_running) return len(self._tasks_running)
@property @property
def num_cancellations(self) -> int: def num_cancelled(self) -> int:
""" """
Returns the number of tasks in the pool that have been cancelled through the pool (up until that moment). Number of tasks in the pool that have been cancelled.
At the moment a task's `cancel_callback` is fired, this counts as a cancellation, and the task is then At the moment a task's `cancel_callback` is fired, it is considered to be cancelled and no longer running,
considered cancelled (instead of running) until its `end_callback` is fired. until its `end_callback` is fired, at which point it is considered ended (instead of cancelled).
""" """
return self._num_cancellations return len(self._tasks_cancelled)
@property @property
def num_ended(self) -> int: def num_ended(self) -> int:
""" """
Returns the number of tasks started through the pool that have stopped running (up until that moment). Number of tasks in the pool that have stopped running.
At the moment a task's `end_callback` is fired, it is considered ended and no longer running (or cancelled). At the moment a task's `end_callback` is fired, it is considered ended and no longer running (or cancelled).
When a task is cancelled, it is not immediately considered ended; only after its `cancel_callback` has returned, When a task is cancelled, it is not immediately considered ended; only after its `cancel_callback` has returned,
@ -165,36 +166,35 @@ class BaseTaskPool:
""" """
return len(self._tasks_ended) return len(self._tasks_ended)
@property
def num_finished(self) -> int:
"""Returns the number of tasks in the pool that have finished running (without having been cancelled)."""
return len(self._tasks_ended) - self._num_cancellations + len(self._tasks_cancelled)
@property @property
def is_full(self) -> bool: def is_full(self) -> bool:
""" """
Returns `False` only if (at that moment) the number of running tasks is below the pool's specified size. `False` if the number of running tasks is less than the pool size.
When the pool is full, any call to start a new task within it will block.
When the pool is full, any call to start a new task within it will block, until there is enough room for it.
""" """
return self._enough_room.locked() return self._enough_room.locked()
def get_task_group_ids(self, group_name: str) -> Set[int]: def get_group_ids(self, *group_names: str) -> Set[int]:
""" """
Returns the set of IDs of all tasks in the specified group. Returns the set of IDs of all tasks in the specified groups.
Args: Args:
group_name: Must be a name of a task group that exists within the pool. *group_names: Each element must be a name of a task group that exists within the pool.
Returns: Returns:
Set of integers representing the task IDs belonging to the specified group. Set of integers representing the task IDs belonging to the specified groups.
Raises: Raises:
`InvalidGroupName` if no task group named `group_name` exists in the pool. `InvalidGroupName`: One of the specified `group_names` does not exist in the pool.
""" """
ids = set()
for name in group_names:
try: try:
return set(self._task_groups[group_name]) ids.update(self._task_groups[name])
except KeyError: except KeyError:
raise exceptions.InvalidGroupName(f"No task group named {group_name} exists in this pool.") raise exceptions.InvalidGroupName(f"No task group named {name} exists in this pool.")
return ids
def _check_start(self, *, awaitable: Awaitable = None, function: CoroutineFunc = None, def _check_start(self, *, awaitable: Awaitable = None, function: CoroutineFunc = None,
ignore_lock: bool = False) -> None: ignore_lock: bool = False) -> None:
@ -210,10 +210,10 @@ class BaseTaskPool:
ignore_lock (optional): If `True`, a locked pool will produce no error here. ignore_lock (optional): If `True`, a locked pool will produce no error here.
Raises: Raises:
`AssertionError` if both or neither of `awaitable` and `function` were passed. `AssertionError`: Both or neither of `awaitable` and `function` were passed.
`asyncio_taskpool.exceptions.PoolIsClosed` if the pool is closed. `asyncio_taskpool.exceptions.PoolIsClosed`: The pool is closed.
`asyncio_taskpool.exceptions.NotCoroutine` if `awaitable` is not a cor. / `function` not a cor. func. `asyncio_taskpool.exceptions.NotCoroutine`: `awaitable` is not a cor. / `function` not a cor. func.
`asyncio_taskpool.exceptions.PoolIsLocked` if the pool has been locked and `ignore_lock` is `False`. `asyncio_taskpool.exceptions.PoolIsLocked`: The pool has been locked and `ignore_lock` is `False`.
""" """
assert (awaitable is None) != (function is None) assert (awaitable is None) != (function is None)
if awaitable and not iscoroutine(awaitable): if awaitable and not iscoroutine(awaitable):
@ -244,7 +244,6 @@ class BaseTaskPool:
""" """
log.debug("Cancelling %s ...", self._task_name(task_id)) log.debug("Cancelling %s ...", self._task_name(task_id))
self._tasks_cancelled[task_id] = self._tasks_running.pop(task_id) self._tasks_cancelled[task_id] = self._tasks_running.pop(task_id)
self._num_cancellations += 1
log.debug("Cancelled %s", self._task_name(task_id)) log.debug("Cancelled %s", self._task_name(task_id))
await execute_optional(custom_callback, args=(task_id,)) await execute_optional(custom_callback, args=(task_id,))
@ -273,7 +272,9 @@ class BaseTaskPool:
async def _task_wrapper(self, awaitable: Awaitable, task_id: int, end_callback: EndCB = None, async def _task_wrapper(self, awaitable: Awaitable, task_id: int, end_callback: EndCB = None,
cancel_callback: CancelCB = None) -> Any: cancel_callback: CancelCB = None) -> Any:
""" """
Universal wrapper around every task run in the pool that returns/raises whatever the wrapped coroutine does. Universal wrapper around every task run in the pool.
Returns/raises whatever the wrapped coroutine does.
Responsible for catching cancellation and awaiting the `_task_cancellation` callback, as well as for awaiting Responsible for catching cancellation and awaiting the `_task_cancellation` callback, as well as for awaiting
the `_task_ending` callback, after the coroutine returns or raises an exception. the `_task_ending` callback, after the coroutine returns or raises an exception.
@ -303,8 +304,8 @@ class BaseTaskPool:
""" """
Starts a coroutine as a new task in the pool. Starts a coroutine as a new task in the pool.
This method can block for a significant amount of time, **only if** the pool is full. This method can block for a significant amount of time, **only if** the pool is full. Otherwise it merely needs
Otherwise it merely needs to acquire the `TaskGroupRegister` lock, which should never be held for a long time. to acquire the :class:`TaskGroupRegister` lock, which should never be held for a long time.
Args: Args:
awaitable: awaitable:
@ -344,9 +345,9 @@ class BaseTaskPool:
task_id: The ID of a task still running within the pool. task_id: The ID of a task still running within the pool.
Raises: Raises:
`asyncio_taskpool.exceptions.AlreadyCancelled` if the task with `task_id` has been (recently) cancelled. `asyncio_taskpool.exceptions.AlreadyCancelled`: The task with `task_id` has been (recently) cancelled.
`asyncio_taskpool.exceptions.AlreadyEnded` if the task with `task_id` has ended (recently). `asyncio_taskpool.exceptions.AlreadyEnded`: The task with `task_id` has ended (recently).
`asyncio_taskpool.exceptions.InvalidTaskID` if no task with `task_id` is known to the pool. `asyncio_taskpool.exceptions.InvalidTaskID`: No task with `task_id` is known to the pool.
""" """
try: try:
return self._tasks_running[task_id] return self._tasks_running[task_id]
@ -361,16 +362,18 @@ class BaseTaskPool:
""" """
Cancels the tasks with the specified IDs. Cancels the tasks with the specified IDs.
Each task ID must belong to a task still running within the pool. Otherwise one of the following exceptions will Each task ID must belong to a task still running within the pool.
be raised:
- `AlreadyCancelled` if one of the `task_ids` belongs to a task that has been (recently) cancelled.
- `AlreadyEnded` if one of the `task_ids` belongs to a task that has ended (recently).
- `InvalidTaskID` if any of the `task_ids` is not known to the pool.
Note that once a pool has been flushed (see below), IDs of tasks that have ended previously will be forgotten. Note that once a pool has been flushed (see below), IDs of tasks that have ended previously will be forgotten.
Args: Args:
task_ids: Arbitrary number of integers. Each must be an ID of a task still running within the pool. *task_ids: Arbitrary number of integers. Each must be an ID of a task still running within the pool.
msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`. msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`.
Raises:
`AlreadyCancelled`: One of the `task_ids` belongs to a task that has been (recently) cancelled.
`AlreadyEnded`: One of the `task_ids` belongs to a task that has ended (recently).
`InvalidTaskID`: One of the `task_ids` is not known to the pool.
""" """
tasks = [self._get_running_task(task_id) for task_id in task_ids] tasks = [self._get_running_task(task_id) for task_id in task_ids]
for task in tasks: for task in tasks:
@ -378,7 +381,9 @@ class BaseTaskPool:
def _cancel_and_remove_all_from_group(self, group_name: str, group_reg: TaskGroupRegister, msg: str = None) -> None: def _cancel_and_remove_all_from_group(self, group_name: str, group_reg: TaskGroupRegister, msg: str = None) -> None:
""" """
Removes all tasks from the specified group and cancels them, if they are still running. Removes all tasks from the specified group and cancels them.
Does nothing to tasks that are no longer running.
Args: Args:
group_name: The name of the group of tasks that shall be cancelled. group_name: The name of the group of tasks that shall be cancelled.
@ -394,14 +399,16 @@ class BaseTaskPool:
async def cancel_group(self, group_name: str, msg: str = None) -> None: async def cancel_group(self, group_name: str, msg: str = None) -> None:
""" """
Cancels an entire group of tasks. The task group is subsequently forgotten by the pool. Cancels an entire group of tasks.
The task group is subsequently forgotten by the pool.
Args: Args:
group_name: The name of the group of tasks that shall be cancelled. group_name: The name of the group of tasks that shall be cancelled.
msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`. msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`.
Raises: Raises:
`InvalidGroupName` if no task group named `group_name` exists in the pool. `InvalidGroupName`: No task group named `group_name` exists in the pool.
""" """
log.debug("%s cancelling tasks in group %s", str(self), group_name) log.debug("%s cancelling tasks in group %s", str(self), group_name)
try: try:
@ -427,12 +434,13 @@ class BaseTaskPool:
async def flush(self, return_exceptions: bool = False): async def flush(self, return_exceptions: bool = False):
""" """
Calls `asyncio.gather` on all ended/cancelled tasks from the pool, and forgets the tasks. Gathers (i.e. awaits) all ended/cancelled tasks in the pool.
This method exists mainly to free up memory of unneeded `Task` objects. The tasks are subsequently forgotten by the pool. This method exists mainly to free up memory of unneeded
`Task` objects.
This method blocks, **only if** any of the tasks block while catching a `asyncio.CancelledError` or any of the It blocks, **only if** any of the tasks block while catching a `asyncio.CancelledError` or any of the callbacks
callbacks registered for the tasks block. registered for the tasks block.
Args: Args:
return_exceptions (optional): Passed directly into `gather`. return_exceptions (optional): Passed directly into `gather`.
@ -443,9 +451,11 @@ class BaseTaskPool:
async def gather_and_close(self, return_exceptions: bool = False): async def gather_and_close(self, return_exceptions: bool = False):
""" """
Calls `asyncio.gather` on **all** tasks in the pool, then permanently closes the pool. Gathers (i.e. awaits) **all** tasks in the pool, then closes it.
The `lock()` method must have been called prior to this. After this method returns, no more tasks can be started in the pool.
:meth:`lock` must have been called prior to this.
This method may block, if one of the tasks blocks while catching a `asyncio.CancelledError` or if any of the This method may block, if one of the tasks blocks while catching a `asyncio.CancelledError` or if any of the
callbacks registered for a task blocks for whatever reason. callbacks registered for a task blocks for whatever reason.
@ -454,7 +464,7 @@ class BaseTaskPool:
return_exceptions (optional): Passed directly into `gather`. return_exceptions (optional): Passed directly into `gather`.
Raises: Raises:
`PoolStillUnlocked` if the pool has not been locked yet. `PoolStillUnlocked`: The pool has not been locked yet.
""" """
if not self._locked: if not self._locked:
raise exceptions.PoolStillUnlocked("Pool must be locked, before tasks can be gathered") raise exceptions.PoolStillUnlocked("Pool must be locked, before tasks can be gathered")
@ -470,7 +480,9 @@ class BaseTaskPool:
class TaskPool(BaseTaskPool): class TaskPool(BaseTaskPool):
""" """
General task pool class. Attempts to emulate part of the interface of `multiprocessing.pool.Pool` from the stdlib. General purpose task pool class.
Attempts to emulate part of the interface of `multiprocessing.pool.Pool` from the stdlib.
A `TaskPool` instance can manage an arbitrary number of concurrent tasks from any coroutine function. A `TaskPool` instance can manage an arbitrary number of concurrent tasks from any coroutine function.
Tasks in the pool can all belong to the same coroutine function, Tasks in the pool can all belong to the same coroutine function,
@ -503,16 +515,19 @@ class TaskPool(BaseTaskPool):
log.debug("%s cancelled and forgot meta tasks from group %s", str(self), group_name) log.debug("%s cancelled and forgot meta tasks from group %s", str(self), group_name)
def _cancel_and_remove_all_from_group(self, group_name: str, group_reg: TaskGroupRegister, msg: str = None) -> None: def _cancel_and_remove_all_from_group(self, group_name: str, group_reg: TaskGroupRegister, msg: str = None) -> None:
"""See base class."""
self._cancel_group_meta_tasks(group_name) self._cancel_group_meta_tasks(group_name)
super()._cancel_and_remove_all_from_group(group_name, group_reg, msg=msg) super()._cancel_and_remove_all_from_group(group_name, group_reg, msg=msg)
async def cancel_group(self, group_name: str, msg: str = None) -> None: async def cancel_group(self, group_name: str, msg: str = None) -> None:
""" """
Cancels an entire group of tasks. The task group is subsequently forgotten by the pool. Cancels an entire group of tasks.
If any methods such as `map()` launched meta tasks belonging to that group, these meta tasks are cancelled The task group is subsequently forgotten by the pool.
If any methods such as :meth:`map` launched meta tasks belonging to that group, these meta tasks are cancelled
before the actual tasks are cancelled. This means that any tasks "queued" to be started by a meta task will before the actual tasks are cancelled. This means that any tasks "queued" to be started by a meta task will
**never even start**. In the case of `map()` this would mean that the `arg_iter` may be abandoned before it **never even start**. In the case of :meth:`map` this would mean that its `arg_iter` may be abandoned before it
was fully consumed (if that is even possible). was fully consumed (if that is even possible).
Args: Args:
@ -520,18 +535,18 @@ class TaskPool(BaseTaskPool):
msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`. msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`.
Raises: Raises:
`InvalidGroupName` if no task group named `group_name` exists in the pool. `InvalidGroupName`: No task group named `group_name` exists in the pool.
""" """
await super().cancel_group(group_name=group_name, msg=msg) await super().cancel_group(group_name=group_name, msg=msg)
async def cancel_all(self, msg: str = None) -> None: async def cancel_all(self, msg: str = None) -> None:
""" """
Cancels all tasks still running within the pool. (This includes all meta tasks.) Cancels all tasks still running within the pool (including meta tasks).
If any methods such as `map()` launched meta tasks, these meta tasks are cancelled before the actual tasks are If any methods such as :meth:`map` launched meta tasks, these meta tasks are cancelled before the actual tasks
cancelled. This means that any tasks "queued" to be started by a meta task will **never even start**. In the are cancelled. This means that any tasks "queued" to be started by a meta task will **never even start**. In the
case of `map()` this would mean that the `arg_iter` may be abandoned before it was fully consumed (if that is case of :meth:`map` this would mean that its `arg_iter` may be abandoned before it was fully consumed (if that
even possible). is even possible).
Args: Args:
msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`. msg (optional): Passed to the `Task.cancel()` method of every task specified by the `task_ids`.
@ -566,12 +581,13 @@ class TaskPool(BaseTaskPool):
async def flush(self, return_exceptions: bool = False): async def flush(self, return_exceptions: bool = False):
""" """
Calls `asyncio.gather` on all ended/cancelled tasks from the pool, and forgets the tasks. Gathers (i.e. awaits) all ended/cancelled tasks in the pool.
This method exists mainly to free up memory of unneeded `Task` objects. It also gets rid of unneeded meta tasks. The tasks are subsequently forgotten by the pool. This method exists mainly to free up memory of unneeded
`Task` objects. It also gets rid of unneeded meta tasks.
This method blocks, **only if** any of the tasks block while catching a `asyncio.CancelledError` or any of the It blocks, **only if** any of the tasks block while catching a `asyncio.CancelledError` or any of the callbacks
callbacks registered for the tasks block. registered for the tasks block.
Args: Args:
return_exceptions (optional): Passed directly into `gather`. return_exceptions (optional): Passed directly into `gather`.
@ -584,15 +600,16 @@ class TaskPool(BaseTaskPool):
async def gather_and_close(self, return_exceptions: bool = False): async def gather_and_close(self, return_exceptions: bool = False):
""" """
Calls `asyncio.gather` on **all** tasks in the pool, then permanently closes the pool. Gathers (i.e. awaits) **all** tasks in the pool, then closes it.
After this method returns, no more tasks can be started in the pool.
The `lock()` method must have been called prior to this. The `lock()` method must have been called prior to this.
Note that this method may block indefinitely as long as any task in the pool is not done. This includes meta Note that this method may block indefinitely as long as any task in the pool is not done. This includes meta
tasks launched my methods such as `map()`, which ends by itself, only once the `arg_iter` is fully consumed, tasks launched by methods such as :meth:`map`, which ends by itself, only once its `arg_iter` is fully consumed,
which may not even be possible (depending on what the iterable of arguments represents). If you want to avoid which may not even be possible (depending on what the iterable of arguments represents). If you want to avoid
this, make sure to call `cancel_all()` prior to this. this, make sure to call :meth:`cancel_all` prior to this.
This method may also block, if one of the tasks blocks while catching a `asyncio.CancelledError` or if any of This method may also block, if one of the tasks blocks while catching a `asyncio.CancelledError` or if any of
the callbacks registered for a task blocks for whatever reason. the callbacks registered for a task blocks for whatever reason.
@ -601,8 +618,9 @@ class TaskPool(BaseTaskPool):
return_exceptions (optional): Passed directly into `gather`. return_exceptions (optional): Passed directly into `gather`.
Raises: Raises:
`PoolStillUnlocked` if the pool has not been locked yet. `PoolStillUnlocked`: The pool has not been locked yet.
""" """
# TODO: It probably makes sense to put this superclass method call at the end (see TODO in `_map`).
await super().gather_and_close(return_exceptions=return_exceptions) await super().gather_and_close(return_exceptions=return_exceptions)
not_cancelled_meta_tasks = set() not_cancelled_meta_tasks = set()
while self._group_meta_tasks_running: while self._group_meta_tasks_running:
@ -659,9 +677,13 @@ class TaskPool(BaseTaskPool):
async def apply(self, func: CoroutineFunc, args: ArgsT = (), kwargs: KwArgsT = None, num: int = 1, async def apply(self, func: CoroutineFunc, args: ArgsT = (), kwargs: KwArgsT = None, num: int = 1,
group_name: str = None, end_callback: EndCB = None, cancel_callback: CancelCB = None) -> str: group_name: str = None, end_callback: EndCB = None, cancel_callback: CancelCB = None) -> str:
""" """
Creates an arbitrary number of coroutines with the supplied arguments and runs them as new tasks in the pool. Creates tasks with the supplied arguments to be run in the pool.
Each coroutine looks like `func(*args, **kwargs)`, meaning the `args` and `kwargs` are unpacked and passed
into `func` before creating each task, and this is done `num` times.
All the new tasks are added to the same task group.
Each coroutine looks like `func(*args, **kwargs)`. All the new tasks are added to the same task group.
This method blocks, **only if** the pool has not enough room to accommodate `num` new tasks. This method blocks, **only if** the pool has not enough room to accommodate `num` new tasks.
Args: Args:
@ -686,9 +708,9 @@ class TaskPool(BaseTaskPool):
The name of the task group that the newly spawned tasks have been added to. The name of the task group that the newly spawned tasks have been added to.
Raises: Raises:
`PoolIsClosed` if the pool is closed. `PoolIsClosed`: The pool is closed.
`NotCoroutine` if `func` is not a coroutine function. `NotCoroutine`: `func` is not a coroutine function.
`PoolIsLocked` if the pool has been locked. `PoolIsLocked`: The pool is currently locked.
""" """
self._check_start(function=func) self._check_start(function=func)
if group_name is None: if group_name is None:
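As a usage sketch (the coroutine, URL, and keyword argument are made-up placeholders, not part of the library): `apply` builds `num` coroutines from one argument combination and files them under a single group, whose name it returns.

```python
async def fetch(url: str, *, retries: int = 3) -> None:
    ...  # placeholder workload


async def demo(pool) -> None:
    # Spawns 5 tasks, each running fetch("https://example.org", retries=5),
    # all registered under the same (returned) task group name.
    group = await pool.apply(fetch, args=("https://example.org",),
                             kwargs={"retries": 5}, num=5)
    print("started group:", group)
```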
@ -702,7 +724,7 @@ class TaskPool(BaseTaskPool):
@classmethod @classmethod
async def _queue_producer(cls, arg_queue: Queue, arg_iter: Iterator[Any], group_name: str) -> None: async def _queue_producer(cls, arg_queue: Queue, arg_iter: Iterator[Any], group_name: str) -> None:
""" """
Keeps the arguments queue from `_map()` full as long as the iterator has elements. Keeps the arguments queue from :meth:`_map` full as long as the iterator has elements.
Intended to be run as a meta task of a specific group. Intended to be run as a meta task of a specific group.
@ -729,7 +751,7 @@ class TaskPool(BaseTaskPool):
@staticmethod @staticmethod
def _get_map_end_callback(map_semaphore: Semaphore, actual_end_callback: EndCB) -> EndCB: def _get_map_end_callback(map_semaphore: Semaphore, actual_end_callback: EndCB) -> EndCB:
"""Returns a wrapped `end_callback` for each `_queue_consumer()` task that will release the `map_semaphore`.""" """Returns a wrapped `end_callback` for each :meth:`_queue_consumer` task that releases the `map_semaphore`."""
async def release_callback(task_id: int) -> None: async def release_callback(task_id: int) -> None:
map_semaphore.release() map_semaphore.release()
await execute_optional(actual_end_callback, args=(task_id,)) await execute_optional(actual_end_callback, args=(task_id,))
@ -738,7 +760,7 @@ class TaskPool(BaseTaskPool):
async def _queue_consumer(self, arg_queue: Queue, group_name: str, func: CoroutineFunc, arg_stars: int = 0, async def _queue_consumer(self, arg_queue: Queue, group_name: str, func: CoroutineFunc, arg_stars: int = 0,
end_callback: EndCB = None, cancel_callback: CancelCB = None) -> None: end_callback: EndCB = None, cancel_callback: CancelCB = None) -> None:
""" """
Consumes arguments from the queue from `_map()` and keeps a limited number of tasks working on them. Consumes arguments from the queue from :meth:`_map` and keeps a limited number of tasks working on them.
The queue's maximum size is taken as the limiting value of an internal semaphore, which must be acquired before The queue's maximum size is taken as the limiting value of an internal semaphore, which must be acquired before
a new task can be started, and which must be released when one of these tasks ends. a new task can be started, and which must be released when one of these tasks ends.
@ -749,7 +771,7 @@ class TaskPool(BaseTaskPool):
arg_queue: arg_queue:
The queue of function arguments to consume for starting a new task. The queue of function arguments to consume for starting a new task.
group_name: group_name:
Name of the associated task group; passed into the `_start_task()` method. Name of the associated task group; passed into :meth:`_start_task`.
func: func:
The coroutine function to use for spawning the new tasks within the task pool. The coroutine function to use for spawning the new tasks within the task pool.
arg_stars (optional): arg_stars (optional):
@ -761,26 +783,34 @@ class TaskPool(BaseTaskPool):
The callback that was specified to execute after cancellation of the task (and the next one). The callback that was specified to execute after cancellation of the task (and the next one).
It is run with the task's ID as its only positional argument. It is run with the task's ID as its only positional argument.
""" """
map_semaphore = Semaphore(arg_queue.maxsize) # value determined by `group_size` in `_map()` map_semaphore = Semaphore(arg_queue.maxsize) # value determined by `group_size` in :meth:`_map`
release_cb = self._get_map_end_callback(map_semaphore, actual_end_callback=end_callback) release_cb = self._get_map_end_callback(map_semaphore, actual_end_callback=end_callback)
while True: while True:
# The following line blocks **only if** the number of running tasks spawned by this method has reached the # The following line blocks **only if** the number of running tasks spawned by this method has reached the
# specified maximum as determined in the `_map()` method. # specified maximum as determined in :meth:`_map`.
await map_semaphore.acquire() await map_semaphore.acquire()
# We await the queue's `get()` coroutine and subsequently ensure that its `task_done()` method is called. # We await the queue's `get()` coroutine and subsequently ensure that its `task_done()` method is called.
async with arg_queue as next_arg: async with arg_queue as next_arg:
if next_arg is self._QUEUE_END_SENTINEL: if next_arg is self._QUEUE_END_SENTINEL:
# The `_queue_producer()` either reached the last argument or was cancelled. # The :meth:`_queue_producer` either reached the last argument or was cancelled.
return return
try:
await self._start_task(star_function(func, next_arg, arg_stars=arg_stars), group_name=group_name, await self._start_task(star_function(func, next_arg, arg_stars=arg_stars), group_name=group_name,
ignore_lock=True, end_callback=release_cb, cancel_callback=cancel_callback) ignore_lock=True, end_callback=release_cb, cancel_callback=cancel_callback)
except Exception as e:
# This means an exception occurred during task **creation**, meaning no task has been created.
# It does not imply an error within the task itself.
log.exception("%s occurred while trying to create task: %s(%s%s)",
str(e.__class__.__name__), func.__name__, '*' * arg_stars, str(next_arg))
map_semaphore.release()
async def _map(self, group_name: str, group_size: int, func: CoroutineFunc, arg_iter: ArgsT, arg_stars: int, async def _map(self, group_name: str, group_size: int, func: CoroutineFunc, arg_iter: ArgsT, arg_stars: int,
end_callback: EndCB = None, cancel_callback: CancelCB = None) -> None: end_callback: EndCB = None, cancel_callback: CancelCB = None) -> None:
""" """
Creates coroutines with arguments from the supplied iterable and runs them as new tasks in the pool. Creates tasks in the pool with arguments from the supplied iterable.
Each coroutine looks like `func(arg)`, `func(*arg)`, or `func(**arg)`, `arg` being taken from `arg_iter`. Each coroutine looks like `func(arg)`, `func(*arg)`, or `func(**arg)`, `arg` being taken from `arg_iter`.
All the new tasks are added to the same task group. All the new tasks are added to the same task group.
The `group_size` determines the maximum number of tasks spawned this way that shall be running concurrently at The `group_size` determines the maximum number of tasks spawned this way that shall be running concurrently at
@ -793,7 +823,7 @@ class TaskPool(BaseTaskPool):
Because this method delegates the spawning of the tasks to two meta tasks (a producer and a consumer of the Because this method delegates the spawning of the tasks to two meta tasks (a producer and a consumer of the
aforementioned queue), it **never blocks**. However, just because this method returns immediately, this does aforementioned queue), it **never blocks**. However, just because this method returns immediately, this does
not mean that any task was started or that any number of tasks will start soon, as this is solely determined by not mean that any task was started or that any number of tasks will start soon, as this is solely determined by
the `pool_size` and the `group_size`. the :attr:`BaseTaskPool.pool_size` and the `group_size`.
Args: Args:
group_name: group_name:
@ -814,8 +844,8 @@ class TaskPool(BaseTaskPool):
It is run with the task's ID as its only positional argument. It is run with the task's ID as its only positional argument.
Raises: Raises:
`ValueError` if `group_size` is less than 1. `ValueError`: `group_size` is less than 1.
`asyncio_taskpool.exceptions.InvalidGroupName` if a group named `group_name` exists in the pool. `asyncio_taskpool.exceptions.InvalidGroupName`: A group named `group_name` exists in the pool.
""" """
self._check_start(function=func) self._check_start(function=func)
if group_size < 1: if group_size < 1:
@ -827,6 +857,11 @@ class TaskPool(BaseTaskPool):
# Set up internal arguments queue. We limit its maximum size to enable lazy consumption of `arg_iter` by the # Set up internal arguments queue. We limit its maximum size to enable lazy consumption of `arg_iter` by the
# `_queue_producer()`; that way an argument # `_queue_producer()`; that way an argument
arg_queue = Queue(maxsize=group_size) arg_queue = Queue(maxsize=group_size)
# TODO: This is the wrong thing to await before gathering!
# Since the queue producer and consumer operate in separate tasks, it is possible that the consumer
# "finishes" the entire queue before the producer manages to put more items in it, thus returning
# the `join` call before the arguments iterator was fully consumed.
# Probably the queue producer task should be awaited before gathering instead.
self._before_gathering.append(join_queue(arg_queue)) self._before_gathering.append(join_queue(arg_queue))
meta_tasks = self._group_meta_tasks_running.setdefault(group_name, set()) meta_tasks = self._group_meta_tasks_running.setdefault(group_name, set())
# Start the producer and consumer meta tasks. # Start the producer and consumer meta tasks.
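The bounded-queue producer/consumer arrangement that `_map` sets up (including the semaphore limit described for `_queue_consumer`) can be pictured with a stripped-down, standalone sketch. This is not the pool's actual code, just the pattern it implements; the names `producer`, `consumer`, and `END` are invented for the illustration.

```python
import asyncio

END = object()  # stand-in for the pool's queue-end sentinel


async def producer(queue: asyncio.Queue, items) -> None:
    # put() blocks whenever the bounded queue is full, so `items` is consumed lazily.
    for item in items:
        await queue.put(item)
    await queue.put(END)


async def consumer(queue: asyncio.Queue, handle, limit: int) -> None:
    sem = asyncio.Semaphore(limit)  # at most `limit` handlers run at once
    running = set()

    async def run(item) -> None:
        try:
            await handle(item)
        finally:
            sem.release()

    while True:
        await sem.acquire()              # wait until a "slot" is free
        item = await queue.get()
        queue.task_done()
        if item is END:
            break
        task = asyncio.create_task(run(item))
        running.add(task)
        task.add_done_callback(running.discard)
    await asyncio.gather(*running)       # wait for handlers still in flight


async def main() -> None:
    queue = asyncio.Queue(maxsize=4)

    async def handle(x) -> None:
        await asyncio.sleep(0.01)

    await asyncio.gather(producer(queue, range(20)), consumer(queue, handle, limit=4))


asyncio.run(main())
```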
@ -837,10 +872,11 @@ class TaskPool(BaseTaskPool):
async def map(self, func: CoroutineFunc, arg_iter: ArgsT, group_size: int = 1, group_name: str = None, async def map(self, func: CoroutineFunc, arg_iter: ArgsT, group_size: int = 1, group_name: str = None,
end_callback: EndCB = None, cancel_callback: CancelCB = None) -> str: end_callback: EndCB = None, cancel_callback: CancelCB = None) -> str:
""" """
An asyncio-task-based equivalent of the `multiprocessing.pool.Pool.map` method. A task-based equivalent of the `multiprocessing.pool.Pool.map` method.
Creates coroutines with arguments from the supplied iterable and runs them as new tasks in the pool. Creates coroutines with arguments from the supplied iterable and runs them as new tasks in the pool.
Each coroutine looks like `func(arg)`, `arg` being an element taken from `arg_iter`. Each coroutine looks like `func(arg)`, `arg` being an element taken from `arg_iter`.
All the new tasks are added to the same task group. All the new tasks are added to the same task group.
The `group_size` determines the maximum number of tasks spawned this way that shall be running concurrently at The `group_size` determines the maximum number of tasks spawned this way that shall be running concurrently at
@ -853,7 +889,7 @@ class TaskPool(BaseTaskPool):
Because this method delegates the spawning of the tasks to two meta tasks (a producer and a consumer of the Because this method delegates the spawning of the tasks to two meta tasks (a producer and a consumer of the
aforementioned queue), it **never blocks**. However, just because this method returns immediately, this does aforementioned queue), it **never blocks**. However, just because this method returns immediately, this does
not mean that any task was started or that any number of tasks will start soon, as this is solely determined by not mean that any task was started or that any number of tasks will start soon, as this is solely determined by
the `pool_size` and the `group_size`. the :attr:`BaseTaskPool.pool_size` and the `group_size`.
Args: Args:
func: func:
@ -875,11 +911,11 @@ class TaskPool(BaseTaskPool):
The name of the task group that the newly spawned tasks will be added to. The name of the task group that the newly spawned tasks will be added to.
Raises: Raises:
`PoolIsClosed` if the pool is closed. `PoolIsClosed`: The pool is closed.
`NotCoroutine` if `func` is not a coroutine function. `NotCoroutine`: `func` is not a coroutine function.
`PoolIsLocked` if the pool has been locked. `PoolIsLocked`: The pool is currently locked.
`ValueError` if `group_size` is less than 1. `ValueError`: `group_size` is less than 1.
`InvalidGroupName` if a group named `group_name` exists in the pool. `InvalidGroupName`: A group named `group_name` exists in the pool.
""" """
if group_name is None: if group_name is None:
group_name = self._generate_group_name('map', func) group_name = self._generate_group_name('map', func)
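A hedged usage sketch of `map` (the `process` coroutine and the records generator are placeholders): the call returns as soon as the meta tasks are set up, and `group_size` caps how many of the spawned tasks run at the same time.

```python
async def process(record: dict) -> None:
    ...  # placeholder workload


async def demo(pool) -> None:
    records = ({"id": i} for i in range(1_000))          # may be a lazy or huge iterable
    group = await pool.map(process, records, group_size=10)
    # Returns immediately; at most 10 `process` tasks from this group run concurrently,
    # subject to the pool size as well.
    print("mapping started in group:", group)
```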
@ -890,8 +926,8 @@ class TaskPool(BaseTaskPool):
async def starmap(self, func: CoroutineFunc, args_iter: Iterable[ArgsT], group_size: int = 1, async def starmap(self, func: CoroutineFunc, args_iter: Iterable[ArgsT], group_size: int = 1,
group_name: str = None, end_callback: EndCB = None, cancel_callback: CancelCB = None) -> str: group_name: str = None, end_callback: EndCB = None, cancel_callback: CancelCB = None) -> str:
""" """
Like `map()` except that the elements of `args_iter` are expected to be iterables themselves to be unpacked as Like :meth:`map` except that the elements of `args_iter` are expected to be iterables themselves to be unpacked
positional arguments to the function. as positional arguments to the function.
Each coroutine then looks like `func(*args)`, `args` being an element from `args_iter`. Each coroutine then looks like `func(*args)`, `args` being an element from `args_iter`.
""" """
if group_name is None: if group_name is None:
@ -904,8 +940,8 @@ class TaskPool(BaseTaskPool):
group_name: str = None, end_callback: EndCB = None, group_name: str = None, end_callback: EndCB = None,
cancel_callback: CancelCB = None) -> str: cancel_callback: CancelCB = None) -> str:
""" """
Like `map()` except that the elements of `kwargs_iter` are expected to be iterables themselves to be unpacked as Like :meth:`map` except that the elements of `kwargs_iter` are expected to be iterables themselves to be
keyword-arguments to the function. unpacked as keyword-arguments to the function.
Each coroutine then looks like `func(**kwargs)`, `kwargs` being an element from `kwargs_iter`. Each coroutine then looks like `func(**kwargs)`, `kwargs` being an element from `kwargs_iter`.
""" """
if group_name is None: if group_name is None:
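For contrast, a small sketch of how the three mapping variants unpack their argument iterables (the coroutine is a placeholder; the keyword-unpacking method is assumed to be named `doublestarmap`, in line with the surrounding hunks):

```python
async def add(x, y=0) -> None:
    ...  # placeholder


async def demo(pool) -> None:
    await pool.map(add, [1, 2, 3])                               # add(1), add(2), add(3)
    await pool.starmap(add, [(1, 2), (3, 4)])                    # add(1, 2), add(3, 4)
    await pool.doublestarmap(add, [{"x": 1, "y": 2}, {"x": 3}])  # add(x=1, y=2), add(x=3)
```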
@ -927,7 +963,7 @@ class SimpleTaskPool(BaseTaskPool):
As long as there is room in the pool, more tasks can be added. (By default, there is no pool size limit.) As long as there is room in the pool, more tasks can be added. (By default, there is no pool size limit.)
Each task started in the pool receives a unique ID, which can be used to cancel specific tasks at any moment. Each task started in the pool receives a unique ID, which can be used to cancel specific tasks at any moment.
However, since all tasks come from the same function-arguments-combination, the specificity of the `cancel()` method However, since all tasks come from the same function-arguments-combination, the specificity of the `cancel()` method
is probably unnecessary. Instead, a simpler `stop()` method is introduced. is probably unnecessary. Instead, a simpler :meth:`stop` method is introduced.
Adding tasks blocks **only if** the pool is full at that moment. Adding tasks blocks **only if** the pool is full at that moment.
""" """
@ -936,6 +972,7 @@ class SimpleTaskPool(BaseTaskPool):
end_callback: EndCB = None, cancel_callback: CancelCB = None, end_callback: EndCB = None, cancel_callback: CancelCB = None,
pool_size: int = inf, name: str = None) -> None: pool_size: int = inf, name: str = None) -> None:
""" """
Initializes all required attributes.
Args: Args:
func: func:
@ -954,6 +991,9 @@ class SimpleTaskPool(BaseTaskPool):
The maximum number of tasks allowed to run concurrently in the pool The maximum number of tasks allowed to run concurrently in the pool
name (optional): name (optional):
An optional name for the pool. An optional name for the pool.
Raises:
`NotCoroutine`: `func` is not a coroutine function.
""" """
if not iscoroutinefunction(func): if not iscoroutinefunction(func):
raise exceptions.NotCoroutine(f"Not a coroutine function: {func}") raise exceptions.NotCoroutine(f"Not a coroutine function: {func}")
@ -966,7 +1006,7 @@ class SimpleTaskPool(BaseTaskPool):
@property @property
def func_name(self) -> str: def func_name(self) -> str:
"""Returns the name of the coroutine function used in the pool.""" """Name of the coroutine function used in the pool."""
return self._func.__name__ return self._func.__name__
async def _start_one(self) -> int: async def _start_one(self) -> int:
@ -974,18 +1014,33 @@ class SimpleTaskPool(BaseTaskPool):
return await self._start_task(self._func(*self._args, **self._kwargs), return await self._start_task(self._func(*self._args, **self._kwargs),
end_callback=self._end_callback, cancel_callback=self._cancel_callback) end_callback=self._end_callback, cancel_callback=self._cancel_callback)
async def start(self, num: int = 1) -> List[int]: async def start(self, num: int) -> List[int]:
"""Starts `num` new tasks within the pool and returns their IDs as a list.""" """
Starts specified number of new tasks in the pool and returns their IDs.
This method may block if there is less room in the pool than the desired number of new tasks.
Args:
num: The number of new tasks to start.
Returns:
List of IDs of the new tasks that have been started (not necessarily in the order they were started).
"""
ids = await gather(*(self._start_one() for _ in range(num))) ids = await gather(*(self._start_one() for _ in range(num)))
assert isinstance(ids, list) # for PyCharm (see above to-do-item) assert isinstance(ids, list) # for PyCharm
return ids return ids
def stop(self, num: int = 1) -> List[int]: def stop(self, num: int) -> List[int]:
""" """
Cancels `num` running tasks within the pool and returns their IDs as a list. Cancels specified number of tasks in the pool and returns their IDs.
The tasks are canceled in LIFO order, meaning tasks started later will be stopped before those started earlier. The tasks are canceled in LIFO order, meaning tasks started later will be stopped before those started earlier.
If `num` is greater than or equal to the number of currently running tasks, naturally all tasks are cancelled.
Args:
num: The number of tasks to cancel; if `num` >= :attr:`BaseTaskPool.num_running`, all tasks are cancelled.
Returns:
List of IDs of the tasks that have been cancelled (in the order they were cancelled).
""" """
ids = [] ids = []
for i, task_id in enumerate(reversed(self._tasks_running)): for i, task_id in enumerate(reversed(self._tasks_running)):
@ -996,5 +1051,8 @@ class SimpleTaskPool(BaseTaskPool):
return ids return ids
def stop_all(self) -> List[int]: def stop_all(self) -> List[int]:
"""Cancels all running tasks and returns their IDs as a list.""" """Cancels all running tasks and returns their IDs."""
return self.stop(self.num_running) return self.stop(self.num_running)
AnyTaskPoolT = Union[TaskPool, SimpleTaskPool]
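Tying the `SimpleTaskPool` pieces together, a minimal end-to-end sketch (the worker coroutine is a placeholder and the top-level import is assumed):

```python
import asyncio

from asyncio_taskpool import SimpleTaskPool  # assumed to be exported at package level


async def worker(source: str) -> None:
    """Placeholder: pretend to poll some source forever."""
    while True:
        await asyncio.sleep(1)


async def main() -> None:
    pool = SimpleTaskPool(worker, args=("jobs",))
    await pool.start(4)    # four identical worker tasks
    await asyncio.sleep(3)
    pool.stop(2)           # cancels the two most recently started tasks (LIFO)
    pool.stop_all()        # cancels the remaining two
    pool.lock()
    await pool.gather_and_close(return_exceptions=True)


asyncio.run(main())
```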

View File

@ -15,7 +15,7 @@ You should have received a copy of the GNU Lesser General Public License along w
If not, see <https://www.gnu.org/licenses/>.""" If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
This module contains the definition of an `asyncio.Queue` subclass. Definition of an :code:`asyncio.Queue` subclass with some small additions.
""" """
@ -23,12 +23,20 @@ from asyncio.queues import Queue as _Queue
from typing import Any from typing import Any
__all__ = ['Queue']
class Queue(_Queue): class Queue(_Queue):
"""This just adds a little syntactic sugar to the `asyncio.Queue`.""" """
Adds a little syntactic sugar to the :code:`asyncio.Queue`.
Allows being used as an async context manager awaiting `get` upon entering the context and calling
:meth:`item_processed` upon exiting it.
"""
def item_processed(self) -> None: def item_processed(self) -> None:
""" """
Does exactly the same as `task_done()`. Does exactly the same as :meth:`asyncio.Queue.task_done`.
This method exists because `task_done` is an atrocious name for the method. It communicates the wrong thing, This method exists because `task_done` is an atrocious name for the method. It communicates the wrong thing,
invites confusion, and immensely reduces readability (in the context of this library). And readability counts. invites confusion, and immensely reduces readability (in the context of this library). And readability counts.
@ -39,7 +47,7 @@ class Queue(_Queue):
""" """
Implements an asynchronous context manager for the queue. Implements an asynchronous context manager for the queue.
Upon entering `get()` is awaited and subsequently whatever came out of the queue is returned. Upon entering :meth:`get` is awaited and subsequently whatever came out of the queue is returned.
It allows writing code this way: It allows writing code this way:
>>> queue = Queue() >>> queue = Queue()
>>> ... >>> ...
@ -52,7 +60,7 @@ class Queue(_Queue):
""" """
Implements an asynchronous context manager for the queue. Implements an asynchronous context manager for the queue.
Upon exiting `item_processed()` is called. This is why this context manager may not always be what you want, Upon exiting :meth:`item_processed` is called. This is why this context manager may not always be what you want,
but in some situations it makes the codes much cleaner. but in some situations it makes the code much cleaner.
""" """
self.item_processed() self.item_processed()
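Completing the doctest-style fragment quoted in the docstring above, the intended pattern looks roughly like this (`handle` is a placeholder coroutine, and `Queue` refers to the subclass defined in this module):

```python
>>> queue = Queue()
>>> # ... a producer puts items into the queue elsewhere ...
>>> async with queue as item:       # __aenter__ awaits queue.get()
...     await handle(item)          # placeholder for real work on the item
... # on exit, item_processed() (an alias for task_done()) is called automatically
```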

View File

@ -1,304 +0,0 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
This module contains the definition of the control session class used by the control server.
"""
import logging
import json
from argparse import ArgumentError, HelpFormatter
from asyncio.streams import StreamReader, StreamWriter
from typing import Callable, Optional, Union, TYPE_CHECKING
from .constants import CMD, SESSION_WRITER, SESSION_MSG_BYTES, CLIENT_INFO
from .exceptions import HelpRequested, NotATaskPool, UnknownTaskPoolClass
from .helpers import get_first_doc_line, return_or_exception, tasks_str
from .pool import BaseTaskPool, TaskPool, SimpleTaskPool
from .session_parser import CommandParser, NUM
if TYPE_CHECKING:
from .server import ControlServer
log = logging.getLogger(__name__)
class ControlSession:
"""
This class defines the API for controlling a task pool instance from the outside.
The commands received from a connected client are translated into method calls on the task pool instance.
A subclass of the standard `argparse.ArgumentParser` is used to handle the input read from the stream.
"""
def __init__(self, server: 'ControlServer', reader: StreamReader, writer: StreamWriter) -> None:
"""
Instantiation should happen once a client connection to the control server has already been established.
For more convenient/efficient access, some of the server's properties are saved in separate attributes.
The argument parser is _not_ instantiated in the constructor. It requires a bit of client information during
initialization, which is obtained in the `client_handshake` method; only there is the parser fully configured.
Args:
server:
The instance of a `ControlServer` subclass starting the session.
reader:
The `asyncio.StreamReader` created when a client connected to the server.
writer:
The `asyncio.StreamWriter` created when a client connected to the server.
"""
self._control_server: 'ControlServer' = server
self._pool: Union[TaskPool, SimpleTaskPool] = server.pool
self._client_class_name = server.client_class_name
self._reader: StreamReader = reader
self._writer: StreamWriter = writer
self._parser: Optional[CommandParser] = None
self._subparsers = None
def _add_command(self, name: str, prog: str = None, short_help: str = None, long_help: str = None,
**kwargs) -> CommandParser:
"""
Convenience method for adding a subparser (i.e. another command) to the main `CommandParser` instance.
Will always pass the session's main `CommandParser` instance as the `parent` keyword-argument.
Args:
name:
The command name; passed directly into the `add_parser` method.
prog (optional):
Also passed into the `add_parser` method as the corresponding keyword-argument. By default, is set
equal to the `name` argument.
short_help (optional):
Passed into the `add_parser` method as the `help` keyword-argument, unless it is left empty and the
`long_help` argument is present; in that case the `long_help` argument is passed as `help`.
long_help (optional):
Passed into the `add_parser` method as the `description` keyword-argument, unless it is left empty and
the `short_help` argument is present; in that case the `short_help` argument is passed as `description`.
**kwargs (optional):
Any keyword-arguments to directly pass into the `add_parser` method.
Returns:
An instance of the `CommandParser` class representing the newly added control command.
"""
if prog is None:
prog = name
kwargs.setdefault('help', short_help or long_help)
kwargs.setdefault('description', long_help or short_help)
return self._subparsers.add_parser(name, prog=prog, parent=self._parser, **kwargs)
def _add_base_commands(self) -> None:
"""
Adds the commands that are supported regardless of the specific subclass of `BaseTaskPool` controlled.
These include commands mapping to the following pool methods:
- __str__
- pool_size (get/set property)
- num_running
"""
self._add_command(CMD.NAME, short_help=get_first_doc_line(self._pool.__class__.__str__))
self._add_command(
CMD.POOL_SIZE,
short_help="Get/set the maximum number of tasks in the pool.",
formatter_class=HelpFormatter
).add_optional_num_argument(
default=None,
help=f"If passed a number: {get_first_doc_line(self._pool.__class__.pool_size.fset)} "
f"If omitted: {get_first_doc_line(self._pool.__class__.pool_size.fget)}"
)
self._add_command(CMD.NUM_RUNNING, short_help=get_first_doc_line(self._pool.__class__.num_running.fget))
def _add_simple_commands(self) -> None:
"""
Adds the commands that are only supported, if a `SimpleTaskPool` object is controlled.
These include commands mapping to the following pool methods:
- start
- stop
- stop_all
- func_name
"""
self._add_command(
CMD.START, short_help=get_first_doc_line(self._pool.__class__.start)
).add_optional_num_argument(
help="Number of tasks to start."
)
self._add_command(
CMD.STOP, short_help=get_first_doc_line(self._pool.__class__.stop)
).add_optional_num_argument(
help="Number of tasks to stop."
)
self._add_command(CMD.STOP_ALL, short_help=get_first_doc_line(self._pool.__class__.stop_all))
self._add_command(CMD.FUNC_NAME, short_help=get_first_doc_line(self._pool.__class__.func_name.fget))
def _add_advanced_commands(self) -> None:
"""
Adds the commands that are only supported, if a `TaskPool` object is controlled.
These include commands mapping to the following pool methods:
- ...
"""
raise NotImplementedError
def _init_parser(self, client_terminal_width: int) -> None:
"""
Initializes and fully configures the `CommandParser` responsible for handling the input.
Depending on what specific task pool class is controlled by the server, different commands are added.
Args:
client_terminal_width:
The number of columns of the client's terminal to be able to nicely format messages from the parser.
"""
parser_kwargs = {
'prog': '',
SESSION_WRITER: self._writer,
CLIENT_INFO.TERMINAL_WIDTH: client_terminal_width,
}
self._parser = CommandParser(**parser_kwargs)
self._subparsers = self._parser.add_subparsers(title="Commands", dest=CMD.CMD)
self._add_base_commands()
if isinstance(self._pool, TaskPool):
self._add_advanced_commands()
elif isinstance(self._pool, SimpleTaskPool):
self._add_simple_commands()
elif isinstance(self._pool, BaseTaskPool):
raise UnknownTaskPoolClass(f"No interface defined for {self._pool.__class__.__name__}")
else:
raise NotATaskPool(f"Not a task pool instance: {self._pool}")
async def client_handshake(self) -> None:
"""
This method must be invoked before starting any other client interaction.
Client info is retrieved, server info is sent back, and the `CommandParser` is initialized and configured.
"""
client_info = json.loads((await self._reader.read(SESSION_MSG_BYTES)).decode().strip())
log.debug("%s connected", self._client_class_name)
self._init_parser(client_info[CLIENT_INFO.TERMINAL_WIDTH])
self._writer.write(str(self._pool).encode())
await self._writer.drain()
async def _write_function_output(self, func: Callable, *args, **kwargs) -> None:
"""
Acts as a wrapper around a call to a specific task pool method.
The method is called and any exception is caught and saved. If there is no output and no exception caught, a
generic confirmation message is sent back to the client. Otherwise the output or a string representation of
the exception caught is sent back.
Args:
func:
Reference to the task pool method.
*args (optional):
Any positional arguments to call the method with.
**kwargs (optional):
Any keyword-arguments to call the method with.
"""
output = await return_or_exception(func, *args, **kwargs)
self._writer.write(b"ok" if output is None else str(output).encode())
async def _cmd_name(self, **_kwargs) -> None:
"""Maps to the `__str__` method of any task pool class."""
log.debug("%s requests task pool name", self._client_class_name)
await self._write_function_output(self._pool.__class__.__str__, self._pool)
async def _cmd_pool_size(self, **kwargs) -> None:
"""Maps to the `pool_size` property of any task pool class."""
num = kwargs.get(NUM)
if num is None:
log.debug("%s requests pool size", self._client_class_name)
await self._write_function_output(self._pool.__class__.pool_size.fget, self._pool)
else:
log.debug("%s requests setting pool size to %s", self._client_class_name, num)
await self._write_function_output(self._pool.__class__.pool_size.fset, self._pool, num)
async def _cmd_num_running(self, **_kwargs) -> None:
"""Maps to the `num_running` property of any task pool class."""
log.debug("%s requests number of running tasks", self._client_class_name)
await self._write_function_output(self._pool.__class__.num_running.fget, self._pool)
async def _cmd_start(self, **kwargs) -> None:
"""Maps to the `start` method of the `SimpleTaskPool` class."""
num = kwargs[NUM]
log.debug("%s requests starting %s %s", self._client_class_name, num, tasks_str(num))
await self._write_function_output(self._pool.start, num)
async def _cmd_stop(self, **kwargs) -> None:
"""Maps to the `stop` method of the `SimpleTaskPool` class."""
num = kwargs[NUM]
log.debug("%s requests stopping %s %s", self._client_class_name, num, tasks_str(num))
await self._write_function_output(self._pool.stop, num)
async def _cmd_stop_all(self, **_kwargs) -> None:
"""Maps to the `stop_all` method of the `SimpleTaskPool` class."""
log.debug("%s requests stopping all tasks", self._client_class_name)
await self._write_function_output(self._pool.stop_all)
async def _cmd_func_name(self, **_kwargs) -> None:
"""Maps to the `func_name` method of the `SimpleTaskPool` class."""
log.debug("%s requests pool function name", self._client_class_name)
await self._write_function_output(self._pool.__class__.func_name.fget, self._pool)
async def _execute_command(self, **kwargs) -> None:
"""
Dynamically gets the correct `_cmd_...` method depending on the name of the command passed and executes it.
Args:
**kwargs:
Must include the `CMD.CMD` key mapping to the command name. The rest of the keyword-arguments is
simply passed into the method determined from the command name.
"""
method = getattr(self, f'_cmd_{kwargs.pop(CMD.CMD).replace("-", "_")}')
await method(**kwargs)
async def _parse_command(self, msg: str) -> None:
"""
Takes a message from the client and attempts to parse it.
If a parsing error occurs, it is returned to the client. If the `HelpRequested` exception was raised by the
`CommandParser`, nothing else happens. Otherwise, the `_execute_command` method is called with the entire
dictionary of keyword-arguments returned by the `CommandParser` passed into it.
Args:
msg:
The non-empty string read from the client stream.
"""
try:
kwargs = vars(self._parser.parse_args(msg.split(' ')))
except ArgumentError as e:
self._writer.write(str(e).encode())
return
except HelpRequested:
return
await self._execute_command(**kwargs)
async def listen(self) -> None:
"""
Enters the main control loop that only ends if either the server or the client disconnect.
Messages from the client are read and passed into the `_parse_command` method, which handles the rest.
This method should be called, when the client connection was established and the handshake was successful.
It will obviously block indefinitely.
"""
while self._control_server.is_serving():
msg = (await self._reader.read(SESSION_MSG_BYTES)).decode().strip()
if not msg:
log.debug("%s disconnected", self._client_class_name)
break
await self._parse_command(msg)
await self._writer.drain()

View File

@ -1,127 +0,0 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
This module contains the definition of the `CommandParser` class used in a control server session.
"""
from argparse import Action, ArgumentParser, ArgumentDefaultsHelpFormatter, HelpFormatter
from asyncio.streams import StreamWriter
from typing import Type, TypeVar
from .constants import SESSION_WRITER, CLIENT_INFO
from .exceptions import HelpRequested
FmtCls = TypeVar('FmtCls', bound=Type[HelpFormatter])
FORMATTER_CLASS = 'formatter_class'
NUM = 'num'
class CommandParser(ArgumentParser):
"""
Subclass of the standard `argparse.ArgumentParser` for remote interaction.
Such a parser is not supposed to ever print to stdout/stderr, but instead direct all messages to a `StreamWriter`
instance passed to it during initialization.
Furthermore, it requires defining the width of the terminal, to adjust help formatting to the terminal size of a
connected client.
Finally, it offers some convenience methods and makes use of custom exceptions.
"""
@staticmethod
def help_formatter_factory(terminal_width: int, base_cls: FmtCls = None) -> FmtCls:
"""
Constructs and returns a subclass of `argparse.HelpFormatter` with a fixed terminal width argument.
Although a custom formatter class can be explicitly passed into the `ArgumentParser` constructor, this is not
as convenient when making use of sub-parsers.
Args:
terminal_width:
The number of columns of the terminal to which to adjust help formatting.
base_cls (optional):
The base class to use for inheritance. By default `argparse.ArgumentDefaultsHelpFormatter` is used.
Returns:
The subclass of `base_cls` which fixes the constructor's `width` keyword-argument to `terminal_width`.
"""
if base_cls is None:
base_cls = ArgumentDefaultsHelpFormatter
class ClientHelpFormatter(base_cls):
def __init__(self, *args, **kwargs) -> None:
kwargs['width'] = terminal_width
super().__init__(*args, **kwargs)
return ClientHelpFormatter
def __init__(self, parent: 'CommandParser' = None, **kwargs) -> None:
"""
Sets additional internal attributes depending on whether a parent-parser was defined.
The `help_formatter_factory` is called and the returned class is mapped to the `FORMATTER_CLASS` keyword.
By default, `exit_on_error` is set to `False` (as opposed to how the parent class handles it).
Args:
parent (optional):
An instance of the same class. Intended to be passed as a keyword-argument into the `add_parser` method
of the subparsers action returned by the `ArgumentParser.add_subparsers` method. If this is present,
the `SESSION_WRITER` and `CLIENT_INFO.TERMINAL_WIDTH` keywords must not be present in `kwargs`.
**kwargs (optional):
In addition to the regular `ArgumentParser` constructor parameters, this method expects the instance of
the `StreamWriter` as well as the terminal width both to be passed explicitly, if the `parent` argument
is empty.
"""
self._session_writer: StreamWriter = parent.session_writer if parent else kwargs.pop(SESSION_WRITER)
self._terminal_width: int = parent.terminal_width if parent else kwargs.pop(CLIENT_INFO.TERMINAL_WIDTH)
kwargs[FORMATTER_CLASS] = self.help_formatter_factory(self._terminal_width, kwargs.get(FORMATTER_CLASS))
kwargs.setdefault('exit_on_error', False)
super().__init__(**kwargs)
@property
def session_writer(self) -> StreamWriter:
"""Returns the predefined stream writer object of the control session."""
return self._session_writer
@property
def terminal_width(self) -> int:
"""Returns the predefined terminal width."""
return self._terminal_width
def _print_message(self, message: str, *args, **kwargs) -> None:
"""This is overridden to ensure that no messages are sent to stdout/stderr, but always to the stream writer."""
if message:
self._session_writer.write(message.encode())
def exit(self, status: int = 0, message: str = None) -> None:
"""This is overridden to prevent system exit to be invoked."""
if message:
self._print_message(message)
def print_help(self, file=None) -> None:
"""This just adds the custom `HelpRequested` exception after the parent class' method."""
super().print_help(file)
raise HelpRequested
def add_optional_num_argument(self, *name_or_flags: str, **kwargs) -> Action:
"""Convenience method for `add_argument` setting the name, `nargs`, `default`, and `type`, unless specified."""
if not name_or_flags:
name_or_flags = (NUM, )
kwargs.setdefault('nargs', '?')
kwargs.setdefault('default', 1)
kwargs.setdefault('type', int)
return self.add_argument(*name_or_flags, **kwargs)

View File

View File

@ -0,0 +1,45 @@
from pathlib import Path
from unittest import IsolatedAsyncioTestCase
from unittest.mock import AsyncMock, MagicMock, patch
from asyncio_taskpool.control.client import TCPControlClient, UnixControlClient
from asyncio_taskpool.control import __main__ as module
class CLITestCase(IsolatedAsyncioTestCase):
def test_parse_cli(self):
socket_path = '/some/path/to.sock'
args = [module.UNIX, socket_path]
expected_kwargs = {
module.CLIENT_CLASS: UnixControlClient,
module.SOCKET_PATH: Path(socket_path)
}
parsed_kwargs = module.parse_cli(args)
self.assertDictEqual(expected_kwargs, parsed_kwargs)
host, port = '1.2.3.4', '1234'
args = [module.TCP, host, port]
expected_kwargs = {
module.CLIENT_CLASS: TCPControlClient,
module.HOST: host,
module.PORT: int(port)
}
parsed_kwargs = module.parse_cli(args)
self.assertDictEqual(expected_kwargs, parsed_kwargs)
with patch('sys.stderr'):
with self.assertRaises(SystemExit):
module.parse_cli(['invalid', 'foo', 'bar'])
@patch.object(module, 'parse_cli')
async def test_main(self, mock_parse_cli: MagicMock):
mock_client_start = AsyncMock()
mock_client = MagicMock(start=mock_client_start)
mock_client_cls = MagicMock(return_value=mock_client)
mock_client_kwargs = {'foo': 123, 'bar': 456, 'baz': 789}
mock_parse_cli.return_value = {module.CLIENT_CLASS: mock_client_cls} | mock_client_kwargs
self.assertIsNone(await module.main())
mock_parse_cli.assert_called_once_with()
mock_client_cls.assert_called_once_with(**mock_client_kwargs)
mock_client_start.assert_awaited_once_with()

View File

@ -20,14 +20,15 @@ Unittests for the `asyncio_taskpool.client` module.
import json import json
import os
import shutil import shutil
import sys import sys
from pathlib import Path from pathlib import Path
from unittest import IsolatedAsyncioTestCase from unittest import IsolatedAsyncioTestCase, skipIf
from unittest.mock import AsyncMock, MagicMock, patch from unittest.mock import AsyncMock, MagicMock, call, patch
from asyncio_taskpool import client from asyncio_taskpool.control import client
from asyncio_taskpool.constants import CLIENT_INFO, SESSION_MSG_BYTES from asyncio_taskpool.internals.constants import CLIENT_INFO, SESSION_MSG_BYTES
FOO, BAR = 'foo', 'bar' FOO, BAR = 'foo', 'bar'
@ -36,7 +37,7 @@ FOO, BAR = 'foo', 'bar'
class ControlClientTestCase(IsolatedAsyncioTestCase): class ControlClientTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None: def setUp(self) -> None:
self.abstract_patcher = patch('asyncio_taskpool.client.ControlClient.__abstractmethods__', set()) self.abstract_patcher = patch('asyncio_taskpool.control.client.ControlClient.__abstractmethods__', set())
self.print_patcher = patch.object(client, 'print') self.print_patcher = patch.object(client, 'print')
self.mock_abstract_methods = self.abstract_patcher.start() self.mock_abstract_methods = self.abstract_patcher.start()
self.mock_print = self.print_patcher.start() self.mock_print = self.print_patcher.start()
@ -54,7 +55,7 @@ class ControlClientTestCase(IsolatedAsyncioTestCase):
def test_client_info(self): def test_client_info(self):
self.assertEqual({CLIENT_INFO.TERMINAL_WIDTH: shutil.get_terminal_size().columns}, self.assertEqual({CLIENT_INFO.TERMINAL_WIDTH: shutil.get_terminal_size().columns},
client.ControlClient.client_info()) client.ControlClient._client_info())
async def test_abstract(self): async def test_abstract(self):
with self.assertRaises(NotImplementedError): with self.assertRaises(NotImplementedError):
@ -64,16 +65,19 @@ class ControlClientTestCase(IsolatedAsyncioTestCase):
self.assertEqual(self.kwargs, self.client._conn_kwargs) self.assertEqual(self.kwargs, self.client._conn_kwargs)
self.assertFalse(self.client._connected) self.assertFalse(self.client._connected)
@patch.object(client.ControlClient, 'client_info') @patch.object(client.ControlClient, '_client_info')
async def test__server_handshake(self, mock_client_info: MagicMock): async def test__server_handshake(self, mock__client_info: MagicMock):
mock_client_info.return_value = mock_info = {FOO: 1, BAR: 9999} mock__client_info.return_value = mock_info = {FOO: 1, BAR: 9999}
self.assertIsNone(await self.client._server_handshake(self.mock_reader, self.mock_writer)) self.assertIsNone(await self.client._server_handshake(self.mock_reader, self.mock_writer))
self.assertTrue(self.client._connected) self.assertTrue(self.client._connected)
mock_client_info.assert_called_once_with() mock__client_info.assert_called_once_with()
self.mock_write.assert_called_once_with(json.dumps(mock_info).encode()) self.mock_write.assert_called_once_with(json.dumps(mock_info).encode())
self.mock_drain.assert_awaited_once_with() self.mock_drain.assert_awaited_once_with()
self.mock_read.assert_awaited_once_with(SESSION_MSG_BYTES) self.mock_read.assert_awaited_once_with(SESSION_MSG_BYTES)
self.mock_print.assert_called_once_with("Connected to", self.mock_read.return_value.decode()) self.mock_print.assert_has_calls([
call("Connected to", self.mock_read.return_value.decode()),
call("Type '-h' to get help and usage instructions for all available commands.\n")
])
@patch.object(client, 'input') @patch.object(client, 'input')
def test__get_command(self, mock_input: MagicMock): def test__get_command(self, mock_input: MagicMock):
@ -171,6 +175,44 @@ class ControlClientTestCase(IsolatedAsyncioTestCase):
self.mock_print.assert_called_once_with("Disconnected from control server.") self.mock_print.assert_called_once_with("Disconnected from control server.")
class TCPControlClientTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.base_init_patcher = patch.object(client.ControlClient, '__init__')
self.mock_base_init = self.base_init_patcher.start()
self.host, self.port = 'localhost', 12345
self.kwargs = {FOO: 123, BAR: 456}
self.client = client.TCPControlClient(host=self.host, port=self.port, **self.kwargs)
def tearDown(self) -> None:
self.base_init_patcher.stop()
def test_init(self):
self.assertEqual(self.host, self.client._host)
self.assertEqual(self.port, self.client._port)
self.mock_base_init.assert_called_once_with(**self.kwargs)
@patch.object(client, 'print')
@patch.object(client, 'open_connection')
async def test__open_connection(self, mock_open_connection: AsyncMock, mock_print: MagicMock):
mock_open_connection.return_value = expected_output = 'something'
kwargs = {'a': 1, 'b': 2}
output = await self.client._open_connection(**kwargs)
self.assertEqual(expected_output, output)
mock_open_connection.assert_awaited_once_with(self.host, self.port, **kwargs)
mock_print.assert_not_called()
mock_open_connection.reset_mock()
mock_open_connection.side_effect = e = ConnectionError()
output1, output2 = await self.client._open_connection(**kwargs)
self.assertIsNone(output1)
self.assertIsNone(output2)
mock_open_connection.assert_awaited_once_with(self.host, self.port, **kwargs)
mock_print.assert_called_once_with(str(e), file=sys.stderr)
@skipIf(os.name == 'nt', "No Unix sockets on Windows :(")
class UnixControlClientTestCase(IsolatedAsyncioTestCase): class UnixControlClientTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None: def setUp(self) -> None:
@ -188,9 +230,9 @@ class UnixControlClientTestCase(IsolatedAsyncioTestCase):
self.mock_base_init.assert_called_once_with(**self.kwargs) self.mock_base_init.assert_called_once_with(**self.kwargs)
@patch.object(client, 'print') @patch.object(client, 'print')
@patch.object(client, 'open_unix_connection') async def test__open_connection(self, mock_print: MagicMock):
async def test__open_connection(self, mock_open_unix_connection: AsyncMock, mock_print: MagicMock): expected_output = 'something'
mock_open_unix_connection.return_value = expected_output = 'something' self.client._open_unix_connection = mock_open_unix_connection = AsyncMock(return_value=expected_output)
kwargs = {'a': 1, 'b': 2} kwargs = {'a': 1, 'b': 2}
output = await self.client._open_connection(**kwargs) output = await self.client._open_connection(**kwargs)
self.assertEqual(expected_output, output) self.assertEqual(expected_output, output)

View File

@ -0,0 +1,289 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.control.parser` module.
"""
from argparse import ArgumentParser, HelpFormatter, ArgumentDefaultsHelpFormatter, RawTextHelpFormatter, SUPPRESS
from ast import literal_eval
from inspect import signature
from unittest import TestCase
from unittest.mock import MagicMock, call, patch
from typing import Iterable
from asyncio_taskpool.control import parser
from asyncio_taskpool.exceptions import HelpRequested, ParserError
from asyncio_taskpool.internals.helpers import resolve_dotted_path
from asyncio_taskpool.internals.types import ArgsT, CancelCB, CoroutineFunc, EndCB, KwArgsT
FOO, BAR = 'foo', 'bar'
class ControlServerTestCase(TestCase):
def setUp(self) -> None:
self.help_formatter_factory_patcher = patch.object(parser.ControlParser, 'help_formatter_factory')
self.mock_help_formatter_factory = self.help_formatter_factory_patcher.start()
self.mock_help_formatter_factory.return_value = RawTextHelpFormatter
self.stream_writer, self.terminal_width = MagicMock(), 420
self.kwargs = {
'stream_writer': self.stream_writer,
'terminal_width': self.terminal_width,
'formatter_class': FOO
}
self.parser = parser.ControlParser(**self.kwargs)
def tearDown(self) -> None:
self.help_formatter_factory_patcher.stop()
def test_help_formatter_factory(self):
self.help_formatter_factory_patcher.stop()
class MockBaseClass(HelpFormatter):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
terminal_width = 123456789
cls = parser.ControlParser.help_formatter_factory(terminal_width, MockBaseClass)
self.assertTrue(issubclass(cls, MockBaseClass))
instance = cls('prog')
self.assertEqual(terminal_width, getattr(instance, '_width'))
cls = parser.ControlParser.help_formatter_factory(terminal_width)
self.assertTrue(issubclass(cls, ArgumentDefaultsHelpFormatter))
instance = cls('prog')
self.assertEqual(terminal_width, getattr(instance, '_width'))
def test_init(self):
self.assertIsInstance(self.parser, ArgumentParser)
self.assertEqual(self.stream_writer, self.parser._stream_writer)
self.assertEqual(self.terminal_width, self.parser._terminal_width)
self.mock_help_formatter_factory.assert_called_once_with(self.terminal_width, FOO)
self.assertFalse(getattr(self.parser, 'exit_on_error'))
self.assertEqual(RawTextHelpFormatter, getattr(self.parser, 'formatter_class'))
self.assertSetEqual(set(), self.parser._flags)
self.assertIsNone(self.parser._commands)
@patch.object(parser, 'get_first_doc_line')
def test_add_function_command(self, mock_get_first_doc_line: MagicMock):
def foo_bar(): pass
mock_subparser = MagicMock()
mock_add_parser = MagicMock(return_value=mock_subparser)
self.parser._commands = MagicMock(add_parser=mock_add_parser)
mock_get_first_doc_line.return_value = mock_help = 'help 123'
kwargs = {FOO: 1, BAR: 2, parser.DESCRIPTION: FOO + BAR}
expected_name = 'foo-bar'
expected_kwargs = {parser.NAME: expected_name, parser.PROG: expected_name, parser.HELP: mock_help} | kwargs
to_omit = ['abc', 'xyz']
output = self.parser.add_function_command(foo_bar, omit_params=to_omit, **kwargs)
self.assertEqual(mock_subparser, output)
mock_add_parser.assert_called_once_with(**expected_kwargs)
mock_subparser.add_function_args.assert_called_once_with(foo_bar, to_omit)
@patch.object(parser, 'get_first_doc_line')
def test_add_property_command(self, mock_get_first_doc_line: MagicMock):
def get_prop(_self): pass
def set_prop(_self, _value): pass
prop = property(get_prop)
mock_subparser = MagicMock()
mock_add_parser = MagicMock(return_value=mock_subparser)
self.parser._commands = MagicMock(add_parser=mock_add_parser)
mock_get_first_doc_line.return_value = mock_help = 'help 123'
kwargs = {FOO: 1, BAR: 2, parser.DESCRIPTION: FOO + BAR}
expected_name = 'get-prop'
expected_kwargs = {parser.NAME: expected_name, parser.PROG: expected_name, parser.HELP: mock_help} | kwargs
output = self.parser.add_property_command(prop, **kwargs)
self.assertEqual(mock_subparser, output)
mock_get_first_doc_line.assert_called_once_with(get_prop)
mock_add_parser.assert_called_once_with(**expected_kwargs)
mock_subparser.add_function_arg.assert_not_called()
mock_get_first_doc_line.reset_mock()
mock_add_parser.reset_mock()
prop = property(get_prop, set_prop)
expected_help = f"Get/set the `.{expected_name}` property"
expected_kwargs = {parser.NAME: expected_name, parser.PROG: expected_name, parser.HELP: expected_help} | kwargs
output = self.parser.add_property_command(prop, **kwargs)
self.assertEqual(mock_subparser, output)
mock_get_first_doc_line.assert_has_calls([call(get_prop), call(set_prop)])
mock_add_parser.assert_called_once_with(**expected_kwargs)
mock_subparser.add_function_arg.assert_called_once_with(
tuple(signature(set_prop).parameters.values())[1],
nargs='?',
default=SUPPRESS,
help=f"If provided: {mock_help} If omitted: {mock_help}"
)
@patch.object(parser.ControlParser, 'add_property_command')
@patch.object(parser.ControlParser, 'add_function_command')
def test_add_class_commands(self, mock_add_function_command: MagicMock, mock_add_property_command: MagicMock):
class FooBar:
some_attribute = None
def _protected(self, _): pass
def __private(self, _): pass
def to_omit(self, _): pass
def method(self, _): pass
@property
def prop(self): return None
mock_set_defaults = MagicMock()
mock_subparser = MagicMock(set_defaults=mock_set_defaults)
mock_add_function_command.return_value = mock_add_property_command.return_value = mock_subparser
x = 'x'
common_kwargs = {parser.STREAM_WRITER: self.parser._stream_writer,
parser.CLIENT_INFO.TERMINAL_WIDTH: self.parser._terminal_width}
expected_output = {'method': mock_subparser, 'prop': mock_subparser}
output = self.parser.add_class_commands(FooBar, public_only=True, omit_members=['to_omit'], member_arg_name=x)
self.assertDictEqual(expected_output, output)
mock_add_function_command.assert_called_once_with(FooBar.method, **common_kwargs)
mock_add_property_command.assert_called_once_with(FooBar.prop, FooBar.__name__, **common_kwargs)
mock_set_defaults.assert_has_calls([call(**{x: FooBar.method}), call(**{x: FooBar.prop})])
@patch.object(parser.ArgumentParser, 'add_subparsers')
def test_add_subparsers(self, mock_base_add_subparsers: MagicMock):
args, kwargs = [1, 2, 42], {FOO: 123, BAR: 456}
mock_base_add_subparsers.return_value = mock_action = MagicMock()
output = self.parser.add_subparsers(*args, **kwargs)
self.assertEqual(mock_action, output)
mock_base_add_subparsers.assert_called_once_with(*args, **kwargs)
def test__print_message(self):
self.stream_writer.write = MagicMock()
self.assertIsNone(self.parser._print_message(''))
self.stream_writer.write.assert_not_called()
msg = 'foo bar baz'
self.assertIsNone(self.parser._print_message(msg))
self.stream_writer.write.assert_called_once_with(msg.encode())
@patch.object(parser.ControlParser, '_print_message')
def test_exit(self, mock__print_message: MagicMock):
self.assertIsNone(self.parser.exit(123, ''))
mock__print_message.assert_not_called()
msg = 'foo bar baz'
self.assertIsNone(self.parser.exit(123, msg))
mock__print_message.assert_called_once_with(msg)
@patch.object(parser.ArgumentParser, 'error')
def test_error(self, mock_supercls_error: MagicMock):
with self.assertRaises(ParserError):
self.parser.error(FOO + BAR)
mock_supercls_error.assert_called_once_with(message=FOO + BAR)
@patch.object(parser.ArgumentParser, 'print_help')
def test_print_help(self, mock_print_help: MagicMock):
arg = MagicMock()
with self.assertRaises(HelpRequested):
self.parser.print_help(arg)
mock_print_help.assert_called_once_with(arg)
@patch.object(parser, '_get_type_from_annotation')
@patch.object(parser.ArgumentParser, 'add_argument')
def test_add_function_arg(self, mock_add_argument: MagicMock, mock__get_type_from_annotation: MagicMock):
mock_add_argument.return_value = expected_output = 'action'
mock__get_type_from_annotation.return_value = mock_type = 'fake'
foo_type, args_type, bar_type, baz_type, boo_type = tuple, str, int, float, complex
bar_default, baz_default, boo_default = 1, 0.1, 1j
def func(foo: foo_type, *args: args_type, bar: bar_type = bar_default, baz: baz_type = baz_default,
boo: boo_type = boo_default, flag: bool = False):
return foo, args, bar, baz, boo, flag
param_foo, param_args, param_bar, param_baz, param_boo, param_flag = signature(func).parameters.values()
kwargs = {FOO + BAR: 'xyz'}
self.assertEqual(expected_output, self.parser.add_function_arg(param_foo, **kwargs))
mock_add_argument.assert_called_once_with('foo', type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(foo_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_args, **kwargs))
mock_add_argument.assert_called_once_with('args', nargs='*', type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(args_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_bar, **kwargs))
mock_add_argument.assert_called_once_with('-b', '--bar', default=bar_default, type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(bar_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_baz, **kwargs))
mock_add_argument.assert_called_once_with('-B', '--baz', default=baz_default, type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(baz_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_boo, **kwargs))
mock_add_argument.assert_called_once_with('--boo', default=boo_default, type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(boo_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_flag, **kwargs))
mock_add_argument.assert_called_once_with('-f', '--flag', action='store_true', **kwargs)
mock__get_type_from_annotation.assert_not_called()
@patch.object(parser.ControlParser, 'add_function_arg')
def test_add_function_args(self, mock_add_function_arg: MagicMock):
def func(foo: str, *args: int, bar: float = 0.1):
return foo, args, bar
_, param_args, param_bar = signature(func).parameters.values()
self.assertIsNone(self.parser.add_function_args(func, omit=['foo']))
mock_add_function_arg.assert_has_calls([
call(param_args, help=repr(param_args.annotation)),
call(param_bar, help=repr(param_bar.annotation)),
])
class RestTestCase(TestCase):
def test__get_arg_type_wrapper(self):
type_wrap = parser._get_arg_type_wrapper(int)
self.assertEqual('int', type_wrap.__name__)
self.assertEqual(SUPPRESS, type_wrap(SUPPRESS))
self.assertEqual(13, type_wrap('13'))
@patch.object(parser, '_get_arg_type_wrapper')
def test__get_type_from_annotation(self, mock__get_arg_type_wrapper: MagicMock):
mock__get_arg_type_wrapper.return_value = expected_output = FOO + BAR
dotted_path_ann = [CoroutineFunc, EndCB, CancelCB]
literal_eval_ann = [ArgsT, KwArgsT, Iterable[ArgsT], Iterable[KwArgsT]]
any_other_ann = MagicMock()
for a in dotted_path_ann:
self.assertEqual(expected_output, parser._get_type_from_annotation(a))
mock__get_arg_type_wrapper.assert_has_calls(len(dotted_path_ann) * [call(resolve_dotted_path)])
mock__get_arg_type_wrapper.reset_mock()
for a in literal_eval_ann:
self.assertEqual(expected_output, parser._get_type_from_annotation(a))
mock__get_arg_type_wrapper.assert_has_calls(len(literal_eval_ann) * [call(literal_eval)])
mock__get_arg_type_wrapper.reset_mock()
self.assertEqual(expected_output, parser._get_type_from_annotation(any_other_ann))
mock__get_arg_type_wrapper.assert_called_once_with(any_other_ann)
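The `add_function_arg` tests above encode the mapping the parser applies to a function's signature: required parameters become positional CLI arguments, `*args` becomes a variadic positional, keyword parameters with defaults become `--options`, and booleans defaulting to `False` become `store_true` switches. As a rough, stdlib-only illustration of that idea (not the library's actual implementation, which also derives short options and custom type converters):

```python
import argparse
from inspect import Parameter, signature


def add_function_args_sketch(parser: argparse.ArgumentParser, func) -> None:
    """Map a function's signature onto CLI arguments (illustration only)."""
    for param in signature(func).parameters.values():
        if param.annotation is bool and param.default is False:
            # Booleans defaulting to False become `store_true` switches.
            parser.add_argument(f'--{param.name}', action='store_true')
        elif param.kind is Parameter.VAR_POSITIONAL:
            # *args becomes a variadic positional argument.
            parser.add_argument(param.name, nargs='*', type=param.annotation)
        elif param.default is Parameter.empty:
            # Required parameters become positional arguments.
            parser.add_argument(param.name, type=param.annotation)
        else:
            # Keyword parameters with defaults become optional `--name` arguments.
            parser.add_argument(f'--{param.name}', default=param.default, type=param.annotation)


def demo(foo: str, *args: int, bar: float = 0.1, flag: bool = False):
    return foo, args, bar, flag


cli = argparse.ArgumentParser()
add_function_args_sketch(cli, demo)
print(cli.parse_args(['abc', '1', '2', '--bar', '0.5', '--flag']))
# -> Namespace(args=[1, 2], bar=0.5, flag=True, foo='abc')
```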

View File

@ -21,12 +21,13 @@ Unittests for the `asyncio_taskpool.server` module.
import asyncio import asyncio
import logging import logging
import os
from pathlib import Path from pathlib import Path
from unittest import IsolatedAsyncioTestCase from unittest import IsolatedAsyncioTestCase, skipIf
from unittest.mock import AsyncMock, MagicMock, patch from unittest.mock import AsyncMock, MagicMock, patch
from asyncio_taskpool import server from asyncio_taskpool.control import server
from asyncio_taskpool.client import ControlClient, UnixControlClient from asyncio_taskpool.control.client import ControlClient, TCPControlClient, UnixControlClient
FOO, BAR = 'foo', 'bar' FOO, BAR = 'foo', 'bar'
@ -45,7 +46,7 @@ class ControlServerTestCase(IsolatedAsyncioTestCase):
server.log.setLevel(cls.log_lvl) server.log.setLevel(cls.log_lvl)
def setUp(self) -> None: def setUp(self) -> None:
self.abstract_patcher = patch('asyncio_taskpool.server.ControlServer.__abstractmethods__', set()) self.abstract_patcher = patch('asyncio_taskpool.control.server.ControlServer.__abstractmethods__', set())
self.mock_abstract_methods = self.abstract_patcher.start() self.mock_abstract_methods = self.abstract_patcher.start()
self.mock_pool = MagicMock() self.mock_pool = MagicMock()
self.kwargs = {FOO: 123, BAR: 456} self.kwargs = {FOO: 123, BAR: 456}
@ -119,6 +120,51 @@ class ControlServerTestCase(IsolatedAsyncioTestCase):
mock_create_task.assert_called_once_with(mock_awaitable) mock_create_task.assert_called_once_with(mock_awaitable)
class TCPControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = server.log.level
server.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
server.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.base_init_patcher = patch.object(server.ControlServer, '__init__')
self.mock_base_init = self.base_init_patcher.start()
self.mock_pool = MagicMock()
self.host, self.port = 'localhost', 12345
self.kwargs = {FOO: 123, BAR: 456}
self.server = server.TCPControlServer(pool=self.mock_pool, host=self.host, port=self.port, **self.kwargs)
def tearDown(self) -> None:
self.base_init_patcher.stop()
def test__client_class(self):
self.assertEqual(TCPControlClient, self.server._client_class)
def test_init(self):
self.assertEqual(self.host, self.server._host)
self.assertEqual(self.port, self.server._port)
self.mock_base_init.assert_called_once_with(self.mock_pool, **self.kwargs)
@patch.object(server, 'start_server')
async def test__get_server_instance(self, mock_start_server: AsyncMock):
mock_start_server.return_value = expected_output = 'totally_a_server'
mock_callback, mock_kwargs = MagicMock(), {'a': 1, 'b': 2}
args = [mock_callback]
output = await self.server._get_server_instance(*args, **mock_kwargs)
self.assertEqual(expected_output, output)
mock_start_server.assert_called_once_with(mock_callback, self.host, self.port, **mock_kwargs)
def test__final_callback(self):
self.assertIsNone(self.server._final_callback())
@skipIf(os.name == 'nt', "No Unix sockets on Windows :(")
class UnixControlServerTestCase(IsolatedAsyncioTestCase): class UnixControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int log_lvl: int
@ -137,7 +183,7 @@ class UnixControlServerTestCase(IsolatedAsyncioTestCase):
self.mock_pool = MagicMock() self.mock_pool = MagicMock()
self.path = '/tmp/asyncio_taskpool' self.path = '/tmp/asyncio_taskpool'
self.kwargs = {FOO: 123, BAR: 456} self.kwargs = {FOO: 123, BAR: 456}
self.server = server.UnixControlServer(pool=self.mock_pool, path=self.path, **self.kwargs) self.server = server.UnixControlServer(pool=self.mock_pool, socket_path=self.path, **self.kwargs)
def tearDown(self) -> None: def tearDown(self) -> None:
self.base_init_patcher.stop() self.base_init_patcher.stop()
@ -149,9 +195,9 @@ class UnixControlServerTestCase(IsolatedAsyncioTestCase):
self.assertEqual(Path(self.path), self.server._socket_path) self.assertEqual(Path(self.path), self.server._socket_path)
self.mock_base_init.assert_called_once_with(self.mock_pool, **self.kwargs) self.mock_base_init.assert_called_once_with(self.mock_pool, **self.kwargs)
@patch.object(server, 'start_unix_server') async def test__get_server_instance(self):
async def test__get_server_instance(self, mock_start_unix_server: AsyncMock): expected_output = 'totally_a_server'
mock_start_unix_server.return_value = expected_output = 'totally_a_server' self.server._start_unix_server = mock_start_unix_server = AsyncMock(return_value=expected_output)
mock_callback, mock_kwargs = MagicMock(), {'a': 1, 'b': 2} mock_callback, mock_kwargs = MagicMock(), {'a': 1, 'b': 2}
args = [mock_callback] args = [mock_callback]
output = await self.server._get_server_instance(*args, **mock_kwargs) output = await self.server._get_server_instance(*args, **mock_kwargs)

View File

@ -0,0 +1,207 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.session` module.
"""
import json
from argparse import ArgumentError, Namespace
from unittest import IsolatedAsyncioTestCase
from unittest.mock import AsyncMock, MagicMock, patch, call
from asyncio_taskpool.control import session
from asyncio_taskpool.internals.constants import CLIENT_INFO, CMD, SESSION_MSG_BYTES, STREAM_WRITER
from asyncio_taskpool.exceptions import HelpRequested
from asyncio_taskpool.pool import SimpleTaskPool
FOO, BAR = 'foo', 'bar'
class ControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = session.log.level
session.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
session.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.mock_pool = MagicMock(spec=SimpleTaskPool(AsyncMock()))
self.mock_client_class_name = FOO + BAR
self.mock_server = MagicMock(pool=self.mock_pool,
client_class_name=self.mock_client_class_name)
self.mock_reader = MagicMock()
self.mock_writer = MagicMock()
self.session = session.ControlSession(self.mock_server, self.mock_reader, self.mock_writer)
def test_init(self):
self.assertEqual(self.mock_server, self.session._control_server)
self.assertEqual(self.mock_pool, self.session._pool)
self.assertEqual(self.mock_client_class_name, self.session._client_class_name)
self.assertEqual(self.mock_reader, self.session._reader)
self.assertEqual(self.mock_writer, self.session._writer)
self.assertIsNone(self.session._parser)
@patch.object(session, 'return_or_exception')
async def test__exec_method_and_respond(self, mock_return_or_exception: AsyncMock):
def method(self, arg1, arg2, *var_args, **rest): pass
test_arg1, test_arg2, test_var_args, test_rest = 123, 'xyz', [0.1, 0.2, 0.3], {'aaa': 1, 'bbb': 11}
kwargs = {'arg1': test_arg1, 'arg2': test_arg2, 'var_args': test_var_args} | test_rest
mock_return_or_exception.return_value = None
self.assertIsNone(await self.session._exec_method_and_respond(method, **kwargs))
mock_return_or_exception.assert_awaited_once_with(
method, self.mock_pool, test_arg1, test_arg2, *test_var_args, **test_rest
)
self.mock_writer.write.assert_called_once_with(session.CMD_OK)
@patch.object(session, 'return_or_exception')
async def test__exec_property_and_respond(self, mock_return_or_exception: AsyncMock):
def prop_get(_): pass
def prop_set(_): pass
prop = property(prop_get, prop_set)
kwargs = {'value': 'something'}
mock_return_or_exception.return_value = None
self.assertIsNone(await self.session._exec_property_and_respond(prop, **kwargs))
mock_return_or_exception.assert_awaited_once_with(prop_set, self.mock_pool, **kwargs)
self.mock_writer.write.assert_called_once_with(session.CMD_OK)
mock_return_or_exception.reset_mock()
self.mock_writer.write.reset_mock()
mock_return_or_exception.return_value = val = 420.69
self.assertIsNone(await self.session._exec_property_and_respond(prop))
mock_return_or_exception.assert_awaited_once_with(prop_get, self.mock_pool)
self.mock_writer.write.assert_called_once_with(str(val).encode())
@patch.object(session, 'ControlParser')
async def test_client_handshake(self, mock_parser_cls: MagicMock):
mock_add_subparsers, mock_add_class_commands = MagicMock(), MagicMock()
mock_parser = MagicMock(add_subparsers=mock_add_subparsers, add_class_commands=mock_add_class_commands)
mock_parser_cls.return_value = mock_parser
width = 5678
msg = ' ' + json.dumps({CLIENT_INFO.TERMINAL_WIDTH: width, FOO: BAR}) + ' '
mock_read = AsyncMock(return_value=msg.encode())
self.mock_reader.read = mock_read
self.mock_writer.drain = AsyncMock()
expected_parser_kwargs = {
STREAM_WRITER: self.mock_writer,
CLIENT_INFO.TERMINAL_WIDTH: width,
'prog': '',
'usage': f'[-h] [{CMD}] ...'
}
expected_subparsers_kwargs = {
'title': "Commands",
'metavar': "(A command followed by '-h' or '--help' will show command-specific help.)"
}
self.assertIsNone(await self.session.client_handshake())
self.assertEqual(mock_parser, self.session._parser)
mock_read.assert_awaited_once_with(SESSION_MSG_BYTES)
mock_parser_cls.assert_called_once_with(**expected_parser_kwargs)
mock_add_subparsers.assert_called_once_with(**expected_subparsers_kwargs)
mock_add_class_commands.assert_called_once_with(self.mock_pool.__class__)
self.mock_writer.write.assert_called_once_with(str(self.mock_pool).encode())
self.mock_writer.drain.assert_awaited_once_with()
@patch.object(session.ControlSession, '_exec_property_and_respond')
@patch.object(session.ControlSession, '_exec_method_and_respond')
async def test__parse_command(self, mock__exec_method_and_respond: AsyncMock,
mock__exec_property_and_respond: AsyncMock):
def method(_): pass
prop = property(method)
msg = 'asdf asd as a'
kwargs = {FOO: BAR, 'hello': 'python'}
mock_parse_args = MagicMock(return_value=Namespace(**{CMD: method}, **kwargs))
self.session._parser = MagicMock(parse_args=mock_parse_args)
self.mock_writer.write = MagicMock()
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_not_called()
mock__exec_method_and_respond.assert_awaited_once_with(method, **kwargs)
mock__exec_property_and_respond.assert_not_called()
mock__exec_method_and_respond.reset_mock()
mock_parse_args.reset_mock()
mock_parse_args.return_value = Namespace(**{CMD: prop}, **kwargs)
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_not_called()
mock__exec_method_and_respond.assert_not_called()
mock__exec_property_and_respond.assert_awaited_once_with(prop, **kwargs)
mock__exec_property_and_respond.reset_mock()
mock_parse_args.reset_mock()
bad_command = 'definitely not a function or property'
mock_parse_args.return_value = Namespace(**{CMD: bad_command}, **kwargs)
with patch.object(session, 'CommandError') as cmd_err_cls:
cmd_err_cls.return_value = exc = MagicMock()
self.assertIsNone(await self.session._parse_command(msg))
cmd_err_cls.assert_called_once_with(f"Unknown command object: {bad_command}")
mock_parse_args.assert_called_once_with(msg.split(' '))
mock__exec_method_and_respond.assert_not_called()
mock__exec_property_and_respond.assert_not_called()
self.mock_writer.write.assert_called_once_with(str(exc).encode())
mock__exec_property_and_respond.reset_mock()
mock_parse_args.reset_mock()
self.mock_writer.write.reset_mock()
mock_parse_args.side_effect = exc = ArgumentError(MagicMock(), "oops")
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_called_once_with(str(exc).encode())
mock__exec_method_and_respond.assert_not_awaited()
mock__exec_property_and_respond.assert_not_awaited()
self.mock_writer.write.reset_mock()
mock_parse_args.reset_mock()
mock_parse_args.side_effect = HelpRequested()
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_not_called()
mock__exec_method_and_respond.assert_not_awaited()
mock__exec_property_and_respond.assert_not_awaited()
@patch.object(session.ControlSession, '_parse_command')
async def test_listen(self, mock__parse_command: AsyncMock):
def make_reader_return_empty():
self.mock_reader.read.return_value = b''
self.mock_writer.drain = AsyncMock(side_effect=make_reader_return_empty)
msg = "fascinating"
self.mock_reader.read = AsyncMock(return_value=f' {msg} '.encode())
self.assertIsNone(await self.session.listen())
self.mock_reader.read.assert_has_awaits([call(SESSION_MSG_BYTES), call(SESSION_MSG_BYTES)])
mock__parse_command.assert_awaited_once_with(msg)
self.mock_writer.drain.assert_awaited_once_with()
self.mock_reader.read.reset_mock()
mock__parse_command.reset_mock()
self.mock_writer.drain.reset_mock()
self.mock_server.is_serving = MagicMock(return_value=False)
self.assertIsNone(await self.session.listen())
self.mock_reader.read.assert_not_awaited()
mock__parse_command.assert_not_awaited()
self.mock_writer.drain.assert_not_awaited()
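The `_parse_command` tests above cover three outcomes: a parsed method is executed against the pool, a parsed property is read from (or written to) the pool, and anything else is answered with an error message. A condensed, synchronous sketch of that dispatch idea follows; the key name `'command'`, the `DummyPool` class, and the `"ok"` reply are placeholders here, and the real session awaits these calls via its helper rather than calling them directly:

```python
from argparse import Namespace


def dispatch_sketch(pool, parsed: Namespace) -> str:
    """Condensed illustration of the session's command dispatch (not the real code)."""
    kwargs = vars(parsed).copy()
    command = kwargs.pop('command')  # hypothetical key; the library uses its own constant
    if isinstance(command, property):
        # Properties are read from (or, given a value, written to) the pool instance.
        result = command.fget(pool)
    elif callable(command):
        # Methods are invoked on the pool with the remaining parsed arguments.
        result = command(pool, **kwargs)
    else:
        return f"Unknown command object: {command}"
    return "ok" if result is None else str(result)


class DummyPool:
    def start(self, num: int) -> None:
        print(f"starting {num} tasks")


print(dispatch_sketch(DummyPool(), Namespace(command=DummyPool.start, num=3)))
# starting 3 tasks
# ok
```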

View File

View File

@ -0,0 +1,84 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.group_register` module.
"""
from asyncio.locks import Lock
from unittest import IsolatedAsyncioTestCase
from unittest.mock import MagicMock, patch
from asyncio_taskpool.internals import group_register
FOO, BAR = 'foo', 'bar'
class TaskGroupRegisterTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.reg = group_register.TaskGroupRegister()
def test_init(self):
ids = [FOO, BAR, 1, 2]
reg = group_register.TaskGroupRegister(*ids)
self.assertSetEqual(set(ids), reg._ids)
self.assertIsInstance(reg._lock, Lock)
def test___contains__(self):
self.reg._ids = {1, 2, 3}
for i in self.reg._ids:
self.assertTrue(i in self.reg)
self.assertFalse(4 in self.reg)
@patch.object(group_register, 'iter', return_value=FOO)
def test___iter__(self, mock_iter: MagicMock):
self.assertEqual(FOO, self.reg.__iter__())
mock_iter.assert_called_once_with(self.reg._ids)
def test___len__(self):
self.reg._ids = [1, 2, 3, 4]
self.assertEqual(4, len(self.reg))
def test_add(self):
self.assertSetEqual(set(), self.reg._ids)
self.assertIsNone(self.reg.add(123))
self.assertSetEqual({123}, self.reg._ids)
def test_discard(self):
self.reg._ids = {123}
self.assertIsNone(self.reg.discard(0))
self.assertIsNone(self.reg.discard(999))
self.assertIsNone(self.reg.discard(123))
self.assertSetEqual(set(), self.reg._ids)
async def test_acquire(self):
self.assertFalse(self.reg._lock.locked())
await self.reg.acquire()
self.assertTrue(self.reg._lock.locked())
def test_release(self):
self.reg._lock._locked = True
self.assertTrue(self.reg._lock.locked())
self.reg.release()
self.assertFalse(self.reg._lock.locked())
async def test_contextmanager(self):
self.assertFalse(self.reg._lock.locked())
async with self.reg as nothing:
self.assertIsNone(nothing)
self.assertTrue(self.reg._lock.locked())
self.assertFalse(self.reg._lock.locked())
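Taken together, these tests describe `TaskGroupRegister` as a set of task IDs that doubles as an async lock. A minimal usage sketch, assuming the package is installed; note that this class lives in the non-public `internals` subpackage, so it is not part of the stable API:

```python
import asyncio

from asyncio_taskpool.internals.group_register import TaskGroupRegister


async def main() -> None:
    reg = TaskGroupRegister(1, 2)      # starts out containing task IDs 1 and 2
    reg.add(3)                         # set-like mutation
    reg.discard(2)                     # discarding an absent ID would be a no-op
    async with reg:                    # the register itself acts as an async lock
        print(sorted(reg), len(reg))   # -> [1, 3] 2


asyncio.run(main())
```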

View File

@ -20,9 +20,9 @@ Unittests for the `asyncio_taskpool.helpers` module.
from unittest import IsolatedAsyncioTestCase from unittest import IsolatedAsyncioTestCase
from unittest.mock import MagicMock, AsyncMock, NonCallableMagicMock from unittest.mock import MagicMock, AsyncMock, NonCallableMagicMock, call, patch
from asyncio_taskpool import helpers from asyncio_taskpool.internals import helpers
class HelpersTestCase(IsolatedAsyncioTestCase): class HelpersTestCase(IsolatedAsyncioTestCase):
@ -87,14 +87,6 @@ class HelpersTestCase(IsolatedAsyncioTestCase):
self.assertIsNone(await helpers.join_queue(mock_queue)) self.assertIsNone(await helpers.join_queue(mock_queue))
mock_join.assert_awaited_once_with() mock_join.assert_awaited_once_with()
def test_task_str(self):
self.assertEqual("task", helpers.tasks_str(1))
self.assertEqual("tasks", helpers.tasks_str(0))
self.assertEqual("tasks", helpers.tasks_str(-1))
self.assertEqual("tasks", helpers.tasks_str(2))
self.assertEqual("tasks", helpers.tasks_str(-10))
self.assertEqual("tasks", helpers.tasks_str(42))
def test_get_first_doc_line(self): def test_get_first_doc_line(self):
expected_output = 'foo bar baz' expected_output = 'foo bar baz'
mock_obj = MagicMock(__doc__=f"""{expected_output} mock_obj = MagicMock(__doc__=f"""{expected_output}
@ -126,3 +118,13 @@ class HelpersTestCase(IsolatedAsyncioTestCase):
output = await helpers.return_or_exception(mock_func, *args, **kwargs) output = await helpers.return_or_exception(mock_func, *args, **kwargs)
self.assertEqual(test_exception, output) self.assertEqual(test_exception, output)
mock_func.assert_called_once_with(*args, **kwargs) mock_func.assert_called_once_with(*args, **kwargs)
def test_resolve_dotted_path(self):
from logging import WARNING
from urllib.request import urlopen
self.assertEqual(WARNING, helpers.resolve_dotted_path('logging.WARNING'))
self.assertEqual(urlopen, helpers.resolve_dotted_path('urllib.request.urlopen'))
with patch.object(helpers, 'import_module', return_value=object) as mock_import_module:
with self.assertRaises(AttributeError):
helpers.resolve_dotted_path('foo.bar.baz')
mock_import_module.assert_has_calls([call('foo'), call('foo.bar')])
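The `resolve_dotted_path` helper turns a dotted import path string into the object it refers to, which is what allows coroutine functions and callbacks to be passed to the control interface as plain strings. A rough, stdlib-only approximation of the idea (the real helper walks the path module by module, as the mocked `import_module` calls above show):

```python
from importlib import import_module


def resolve_dotted_path_sketch(dotted_path: str):
    """Illustration only: import the module part, then fetch the trailing attribute."""
    module_path, _, attribute = dotted_path.rpartition('.')
    return getattr(import_module(module_path), attribute)


assert resolve_dotted_path_sketch('logging.WARNING') == 30
print(resolve_dotted_path_sketch('urllib.request.urlopen'))
# <function urlopen at 0x...>
```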

View File

@ -27,7 +27,7 @@ from unittest.mock import PropertyMock, MagicMock, AsyncMock, patch, call
from typing import Type from typing import Type
from asyncio_taskpool import pool, exceptions from asyncio_taskpool import pool, exceptions
from asyncio_taskpool.constants import DATETIME_FORMAT from asyncio_taskpool.internals.constants import DATETIME_FORMAT
EMPTY_LIST, EMPTY_DICT, EMPTY_SET = [], {}, set() EMPTY_LIST, EMPTY_DICT, EMPTY_SET = [], {}, set()
@ -84,7 +84,6 @@ class BaseTaskPoolTestCase(CommonTestCase):
def test_init(self): def test_init(self):
self.assertEqual(0, self.task_pool._num_started) self.assertEqual(0, self.task_pool._num_started)
self.assertEqual(0, self.task_pool._num_cancellations)
self.assertFalse(self.task_pool._locked) self.assertFalse(self.task_pool._locked)
self.assertFalse(self.task_pool._closed) self.assertFalse(self.task_pool._closed)
@ -114,14 +113,14 @@ class BaseTaskPoolTestCase(CommonTestCase):
def test_pool_size(self): def test_pool_size(self):
self.pool_size_patcher.stop() self.pool_size_patcher.stop()
self.task_pool._pool_size = self.TEST_POOL_SIZE self.task_pool._enough_room._value = self.TEST_POOL_SIZE
self.assertEqual(self.TEST_POOL_SIZE, self.task_pool.pool_size) self.assertEqual(self.TEST_POOL_SIZE, self.task_pool.pool_size)
with self.assertRaises(ValueError): with self.assertRaises(ValueError):
self.task_pool.pool_size = -1 self.task_pool.pool_size = -1
self.task_pool.pool_size = new_size = 69 self.task_pool.pool_size = new_size = 69
self.assertEqual(new_size, self.task_pool._pool_size) self.assertEqual(new_size, self.task_pool._enough_room._value)
def test_is_locked(self): def test_is_locked(self):
self.task_pool._locked = FOO self.task_pool._locked = FOO
@ -145,30 +144,23 @@ class BaseTaskPoolTestCase(CommonTestCase):
self.task_pool._tasks_running = {1: FOO, 2: BAR, 3: BAZ} self.task_pool._tasks_running = {1: FOO, 2: BAR, 3: BAZ}
self.assertEqual(3, self.task_pool.num_running) self.assertEqual(3, self.task_pool.num_running)
def test_num_cancellations(self): def test_num_cancelled(self):
self.task_pool._num_cancellations = 3 self.task_pool._tasks_cancelled = {1: FOO, 2: BAR, 3: BAZ}
self.assertEqual(3, self.task_pool.num_cancellations) self.assertEqual(3, self.task_pool.num_cancelled)
def test_num_ended(self): def test_num_ended(self):
self.task_pool._tasks_ended = {1: FOO, 2: BAR, 3: BAZ} self.task_pool._tasks_ended = {1: FOO, 2: BAR, 3: BAZ}
self.assertEqual(3, self.task_pool.num_ended) self.assertEqual(3, self.task_pool.num_ended)
def test_num_finished(self):
self.task_pool._num_cancellations = num_cancellations = 69
num_ended = 420
self.task_pool._tasks_ended = {i: FOO for i in range(num_ended)}
self.task_pool._tasks_cancelled = mock_cancelled_dict = {1: FOO, 2: BAR, 3: BAZ}
self.assertEqual(num_ended - num_cancellations + len(mock_cancelled_dict), self.task_pool.num_finished)
def test_is_full(self): def test_is_full(self):
self.assertEqual(self.task_pool._enough_room.locked(), self.task_pool.is_full) self.assertEqual(self.task_pool._enough_room.locked(), self.task_pool.is_full)
def test_get_task_group_ids(self): def test_get_group_ids(self):
group_name, ids = 'abcdef', [1, 2, 3] group_name, ids = 'abcdef', [1, 2, 3]
self.task_pool._task_groups[group_name] = MagicMock(__iter__=lambda _: iter(ids)) self.task_pool._task_groups[group_name] = MagicMock(__iter__=lambda _: iter(ids))
self.assertEqual(set(ids), self.task_pool.get_task_group_ids(group_name)) self.assertEqual(set(ids), self.task_pool.get_group_ids(group_name))
with self.assertRaises(exceptions.InvalidGroupName): with self.assertRaises(exceptions.InvalidGroupName):
self.task_pool.get_task_group_ids('something else') self.task_pool.get_group_ids(group_name, 'something else')
async def test__check_start(self): async def test__check_start(self):
self.task_pool._closed = True self.task_pool._closed = True
@ -200,12 +192,10 @@ class BaseTaskPoolTestCase(CommonTestCase):
@patch.object(pool.BaseTaskPool, '_task_name', return_value=FOO) @patch.object(pool.BaseTaskPool, '_task_name', return_value=FOO)
async def test__task_cancellation(self, mock__task_name: MagicMock, mock_execute_optional: AsyncMock): async def test__task_cancellation(self, mock__task_name: MagicMock, mock_execute_optional: AsyncMock):
task_id, mock_task, mock_callback = 1, MagicMock(), MagicMock() task_id, mock_task, mock_callback = 1, MagicMock(), MagicMock()
self.task_pool._num_cancellations = cancelled = 3
self.task_pool._tasks_running[task_id] = mock_task self.task_pool._tasks_running[task_id] = mock_task
self.assertIsNone(await self.task_pool._task_cancellation(task_id, mock_callback)) self.assertIsNone(await self.task_pool._task_cancellation(task_id, mock_callback))
self.assertNotIn(task_id, self.task_pool._tasks_running) self.assertNotIn(task_id, self.task_pool._tasks_running)
self.assertEqual(mock_task, self.task_pool._tasks_cancelled[task_id]) self.assertEqual(mock_task, self.task_pool._tasks_cancelled[task_id])
self.assertEqual(cancelled + 1, self.task_pool._num_cancellations)
mock__task_name.assert_called_with(task_id) mock__task_name.assert_called_with(task_id)
mock_execute_optional.assert_awaited_once_with(mock_callback, args=(task_id, )) mock_execute_optional.assert_awaited_once_with(mock_callback, args=(task_id, ))
@ -603,28 +593,34 @@ class TaskPoolTestCase(CommonTestCase):
@patch.object(pool, 'star_function') @patch.object(pool, 'star_function')
@patch.object(pool.TaskPool, '_start_task') @patch.object(pool.TaskPool, '_start_task')
@patch.object(pool, 'Semaphore')
@patch.object(pool.TaskPool, '_get_map_end_callback') @patch.object(pool.TaskPool, '_get_map_end_callback')
async def test__queue_consumer(self, mock__get_map_end_callback: MagicMock, mock_semaphore_cls: MagicMock, @patch.object(pool, 'Semaphore')
async def test__queue_consumer(self, mock_semaphore_cls: MagicMock, mock__get_map_end_callback: MagicMock,
mock__start_task: AsyncMock, mock_star_function: MagicMock): mock__start_task: AsyncMock, mock_star_function: MagicMock):
mock__get_map_end_callback.return_value = map_cb = MagicMock()
mock_semaphore_cls.return_value = semaphore = Semaphore(3) mock_semaphore_cls.return_value = semaphore = Semaphore(3)
mock_star_function.return_value = awaitable = 'totally an awaitable' mock__get_map_end_callback.return_value = map_cb = MagicMock()
arg1, arg2 = 123456789, 'function argument' awaitable = 'totally an awaitable'
mock_star_function.side_effect = [awaitable, awaitable, Exception()]
arg1, arg2, bad = 123456789, 'function argument', None
mock_q_maxsize = 3 mock_q_maxsize = 3
mock_q = MagicMock(__aenter__=AsyncMock(side_effect=[arg1, arg2, pool.TaskPool._QUEUE_END_SENTINEL]), mock_q = MagicMock(__aenter__=AsyncMock(side_effect=[arg1, arg2, bad, pool.TaskPool._QUEUE_END_SENTINEL]),
__aexit__=AsyncMock(), maxsize=mock_q_maxsize) __aexit__=AsyncMock(), maxsize=mock_q_maxsize)
group_name, mock_func, stars = 'whatever', MagicMock(), 3 group_name, mock_func, stars = 'whatever', MagicMock(__name__="mock"), 3
end_cb, cancel_cb = MagicMock(), MagicMock() end_cb, cancel_cb = MagicMock(), MagicMock()
self.assertIsNone(await self.task_pool._queue_consumer(mock_q, group_name, mock_func, stars, end_cb, cancel_cb)) self.assertIsNone(await self.task_pool._queue_consumer(mock_q, group_name, mock_func, stars, end_cb, cancel_cb))
# We expect the semaphore to be acquired 3 times, then be released once after the exception occurs, then
# acquired once more when the `_QUEUE_END_SENTINEL` is reached. Since we initialized it with a value of 3,
# at the end of the loop, we expect it be locked.
self.assertTrue(semaphore.locked()) self.assertTrue(semaphore.locked())
mock_semaphore_cls.assert_called_once_with(mock_q_maxsize)
mock__get_map_end_callback.assert_called_once_with(semaphore, actual_end_callback=end_cb) mock__get_map_end_callback.assert_called_once_with(semaphore, actual_end_callback=end_cb)
mock__start_task.assert_has_awaits(2 * [ mock__start_task.assert_has_awaits(2 * [
call(awaitable, group_name=group_name, ignore_lock=True, end_callback=map_cb, cancel_callback=cancel_cb) call(awaitable, group_name=group_name, ignore_lock=True, end_callback=map_cb, cancel_callback=cancel_cb)
]) ])
mock_star_function.assert_has_calls([ mock_star_function.assert_has_calls([
call(mock_func, arg1, arg_stars=stars), call(mock_func, arg1, arg_stars=stars),
call(mock_func, arg2, arg_stars=stars) call(mock_func, arg2, arg_stars=stars),
call(mock_func, bad, arg_stars=stars)
]) ])
@patch.object(pool, 'create_task') @patch.object(pool, 'create_task')

View File

@ -0,0 +1,43 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.queue_context` module.
"""
from unittest import IsolatedAsyncioTestCase
from unittest.mock import MagicMock, patch
from asyncio_taskpool.queue_context import Queue
class QueueTestCase(IsolatedAsyncioTestCase):
def test_item_processed(self):
queue = Queue()
queue._unfinished_tasks = 1000
queue.item_processed()
self.assertEqual(999, queue._unfinished_tasks)
@patch.object(Queue, 'item_processed')
async def test_contextmanager(self, mock_item_processed: MagicMock):
queue = Queue()
item = 'foo'
queue.put_nowait(item)
async with queue as item_from_queue:
self.assertEqual(item, item_from_queue)
mock_item_processed.assert_not_called()
mock_item_processed.assert_called_once_with()
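These tests pin down the queue's context-manager behaviour: entering the `async with` block fetches the next item, and leaving it marks that item as processed. A minimal usage sketch, assuming the package is installed:

```python
import asyncio

from asyncio_taskpool.queue_context import Queue


async def main() -> None:
    queue = Queue()
    queue.put_nowait('foo')
    # Entering the block retrieves the next item; leaving it marks the item as processed.
    async with queue as item:
        print("processing", item)


asyncio.run(main())
```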

View File

@ -1,324 +0,0 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.session` module.
"""
import json
from argparse import ArgumentError, Namespace
from unittest import IsolatedAsyncioTestCase
from unittest.mock import AsyncMock, MagicMock, patch, call
from asyncio_taskpool import session
from asyncio_taskpool.constants import CLIENT_INFO, CMD, SESSION_MSG_BYTES, SESSION_WRITER
from asyncio_taskpool.exceptions import HelpRequested, NotATaskPool, UnknownTaskPoolClass
from asyncio_taskpool.pool import BaseTaskPool, TaskPool, SimpleTaskPool
FOO, BAR = 'foo', 'bar'
class ControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = session.log.level
session.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
session.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.mock_pool = MagicMock(spec=SimpleTaskPool(AsyncMock()))
self.mock_client_class_name = FOO + BAR
self.mock_server = MagicMock(pool=self.mock_pool,
client_class_name=self.mock_client_class_name)
self.mock_reader = MagicMock()
self.mock_writer = MagicMock()
self.session = session.ControlSession(self.mock_server, self.mock_reader, self.mock_writer)
def test_init(self):
self.assertEqual(self.mock_server, self.session._control_server)
self.assertEqual(self.mock_pool, self.session._pool)
self.assertEqual(self.mock_client_class_name, self.session._client_class_name)
self.assertEqual(self.mock_reader, self.session._reader)
self.assertEqual(self.mock_writer, self.session._writer)
self.assertIsNone(self.session._parser)
self.assertIsNone(self.session._subparsers)
def test__add_command(self):
expected_output = 123456
mock_add_parser = MagicMock(return_value=expected_output)
self.session._subparsers = MagicMock(add_parser=mock_add_parser)
self.session._parser = MagicMock()
name, prog, short_help, long_help = 'abc', None, 'short123', None
kwargs = {'x': 1, 'y': 2}
output = self.session._add_command(name, prog, short_help, long_help, **kwargs)
self.assertEqual(expected_output, output)
mock_add_parser.assert_called_once_with(name, prog=name, help=short_help, description=short_help,
parent=self.session._parser, **kwargs)
mock_add_parser.reset_mock()
prog, long_help = 'ffffff', 'so long, wow'
output = self.session._add_command(name, prog, short_help, long_help, **kwargs)
self.assertEqual(expected_output, output)
mock_add_parser.assert_called_once_with(name, prog=prog, help=short_help, description=long_help,
parent=self.session._parser, **kwargs)
mock_add_parser.reset_mock()
short_help = None
output = self.session._add_command(name, prog, short_help, long_help, **kwargs)
self.assertEqual(expected_output, output)
mock_add_parser.assert_called_once_with(name, prog=prog, help=long_help, description=long_help,
parent=self.session._parser, **kwargs)
@patch.object(session, 'get_first_doc_line')
@patch.object(session.ControlSession, '_add_command')
def test__adding_commands(self, mock__add_command: MagicMock, mock_get_first_doc_line: MagicMock):
self.assertIsNone(self.session._add_base_commands())
mock__add_command.assert_called()
mock_get_first_doc_line.assert_called()
mock__add_command.reset_mock()
mock_get_first_doc_line.reset_mock()
self.assertIsNone(self.session._add_simple_commands())
mock__add_command.assert_called()
mock_get_first_doc_line.assert_called()
with self.assertRaises(NotImplementedError):
self.session._add_advanced_commands()
@patch.object(session.ControlSession, '_add_simple_commands')
@patch.object(session.ControlSession, '_add_advanced_commands')
@patch.object(session.ControlSession, '_add_base_commands')
@patch.object(session, 'CommandParser')
def test__init_parser(self, mock_command_parser_cls: MagicMock, mock__add_base_commands: MagicMock,
mock__add_advanced_commands: MagicMock, mock__add_simple_commands: MagicMock):
mock_command_parser_cls.return_value = mock_parser = MagicMock()
self.session._pool = TaskPool()
width = 1234
expected_parser_kwargs = {
'prog': '',
SESSION_WRITER: self.mock_writer,
CLIENT_INFO.TERMINAL_WIDTH: width,
}
self.assertIsNone(self.session._init_parser(width))
mock_command_parser_cls.assert_called_once_with(**expected_parser_kwargs)
mock_parser.add_subparsers.assert_called_once_with(title="Commands", dest=CMD.CMD)
mock__add_base_commands.assert_called_once_with()
mock__add_advanced_commands.assert_called_once_with()
mock__add_simple_commands.assert_not_called()
mock_command_parser_cls.reset_mock()
mock_parser.add_subparsers.reset_mock()
mock__add_base_commands.reset_mock()
mock__add_advanced_commands.reset_mock()
mock__add_simple_commands.reset_mock()
async def fake_coroutine(): pass
self.session._pool = SimpleTaskPool(fake_coroutine)
self.assertIsNone(self.session._init_parser(width))
mock_command_parser_cls.assert_called_once_with(**expected_parser_kwargs)
mock_parser.add_subparsers.assert_called_once_with(title="Commands", dest=CMD.CMD)
mock__add_base_commands.assert_called_once_with()
mock__add_advanced_commands.assert_not_called()
mock__add_simple_commands.assert_called_once_with()
mock_command_parser_cls.reset_mock()
mock_parser.add_subparsers.reset_mock()
mock__add_base_commands.reset_mock()
mock__add_advanced_commands.reset_mock()
mock__add_simple_commands.reset_mock()
class FakeTaskPool(BaseTaskPool):
pass
self.session._pool = FakeTaskPool()
with self.assertRaises(UnknownTaskPoolClass):
self.session._init_parser(width)
mock_command_parser_cls.assert_called_once_with(**expected_parser_kwargs)
mock_parser.add_subparsers.assert_called_once_with(title="Commands", dest=CMD.CMD)
mock__add_base_commands.assert_called_once_with()
mock__add_advanced_commands.assert_not_called()
mock__add_simple_commands.assert_not_called()
mock_command_parser_cls.reset_mock()
mock_parser.add_subparsers.reset_mock()
mock__add_base_commands.reset_mock()
mock__add_advanced_commands.reset_mock()
mock__add_simple_commands.reset_mock()
self.session._pool = MagicMock()
with self.assertRaises(NotATaskPool):
self.session._init_parser(width)
mock_command_parser_cls.assert_called_once_with(**expected_parser_kwargs)
mock_parser.add_subparsers.assert_called_once_with(title="Commands", dest=CMD.CMD)
mock__add_base_commands.assert_called_once_with()
mock__add_advanced_commands.assert_not_called()
mock__add_simple_commands.assert_not_called()
@patch.object(session.ControlSession, '_init_parser')
async def test_client_handshake(self, mock__init_parser: MagicMock):
width = 5678
msg = ' ' + json.dumps({CLIENT_INFO.TERMINAL_WIDTH: width, FOO: BAR}) + ' '
mock_read = AsyncMock(return_value=msg.encode())
self.mock_reader.read = mock_read
self.mock_writer.drain = AsyncMock()
self.assertIsNone(await self.session.client_handshake())
mock_read.assert_awaited_once_with(SESSION_MSG_BYTES)
mock__init_parser.assert_called_once_with(width)
self.mock_writer.write.assert_called_once_with(str(self.mock_pool).encode())
self.mock_writer.drain.assert_awaited_once_with()
@patch.object(session, 'return_or_exception')
async def test__write_function_output(self, mock_return_or_exception: MagicMock):
self.mock_writer.write = MagicMock()
mock_return_or_exception.return_value = None
func, args, kwargs = MagicMock(), (1, 2, 3), {'a': 'A', 'b': 'B'}
self.assertIsNone(await self.session._write_function_output(func, *args, **kwargs))
mock_return_or_exception.assert_called_once_with(func, *args, **kwargs)
self.mock_writer.write.assert_called_once_with(b"ok")
mock_return_or_exception.reset_mock()
self.mock_writer.write.reset_mock()
mock_return_or_exception.return_value = output = MagicMock()
self.assertIsNone(await self.session._write_function_output(func, *args, **kwargs))
mock_return_or_exception.assert_called_once_with(func, *args, **kwargs)
self.mock_writer.write.assert_called_once_with(str(output).encode())
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_name(self, mock__write_function_output: AsyncMock):
self.assertIsNone(await self.session._cmd_name())
mock__write_function_output.assert_awaited_once_with(self.mock_pool.__class__.__str__, self.session._pool)
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_pool_size(self, mock__write_function_output: AsyncMock):
num = 12345
kwargs = {session.NUM: num, FOO: BAR}
self.assertIsNone(await self.session._cmd_pool_size(**kwargs))
mock__write_function_output.assert_awaited_once_with(
self.mock_pool.__class__.pool_size.fset, self.session._pool, num
)
mock__write_function_output.reset_mock()
kwargs.pop(session.NUM)
self.assertIsNone(await self.session._cmd_pool_size(**kwargs))
mock__write_function_output.assert_awaited_once_with(
self.mock_pool.__class__.pool_size.fget, self.session._pool
)
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_num_running(self, mock__write_function_output: AsyncMock):
self.assertIsNone(await self.session._cmd_num_running())
mock__write_function_output.assert_awaited_once_with(
self.mock_pool.__class__.num_running.fget, self.session._pool
)
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_start(self, mock__write_function_output: AsyncMock):
num = 12345
kwargs = {session.NUM: num, FOO: BAR}
self.assertIsNone(await self.session._cmd_start(**kwargs))
mock__write_function_output.assert_awaited_once_with(self.mock_pool.start, num)
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_stop(self, mock__write_function_output: AsyncMock):
num = 12345
kwargs = {session.NUM: num, FOO: BAR}
self.assertIsNone(await self.session._cmd_stop(**kwargs))
mock__write_function_output.assert_awaited_once_with(self.mock_pool.stop, num)
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_stop_all(self, mock__write_function_output: AsyncMock):
self.assertIsNone(await self.session._cmd_stop_all())
mock__write_function_output.assert_awaited_once_with(self.mock_pool.stop_all)
@patch.object(session.ControlSession, '_write_function_output')
async def test__cmd_func_name(self, mock__write_function_output: AsyncMock):
self.assertIsNone(await self.session._cmd_func_name())
mock__write_function_output.assert_awaited_once_with(
self.mock_pool.__class__.func_name.fget, self.session._pool
)
async def test__execute_command(self):
mock_method = AsyncMock()
cmd = 'this-is-a-test'
setattr(self.session, '_cmd_' + cmd.replace('-', '_'), mock_method)
kwargs = {FOO: BAR, 'hello': 'python'}
self.assertIsNone(await self.session._execute_command(**{CMD.CMD: cmd}, **kwargs))
mock_method.assert_awaited_once_with(**kwargs)
@patch.object(session.ControlSession, '_execute_command')
async def test__parse_command(self, mock__execute_command: AsyncMock):
msg = 'asdf asd as a'
kwargs = {FOO: BAR, 'hello': 'python'}
mock_parse_args = MagicMock(return_value=Namespace(**kwargs))
self.session._parser = MagicMock(parse_args=mock_parse_args)
self.mock_writer.write = MagicMock()
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_not_called()
mock__execute_command.assert_awaited_once_with(**kwargs)
mock__execute_command.reset_mock()
mock_parse_args.reset_mock()
mock_parse_args.side_effect = exc = ArgumentError(MagicMock(), "oops")
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_called_once_with(str(exc).encode())
mock__execute_command.assert_not_awaited()
self.mock_writer.write.reset_mock()
mock_parse_args.reset_mock()
mock_parse_args.side_effect = HelpRequested()
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.mock_writer.write.assert_not_called()
mock__execute_command.assert_not_awaited()
@patch.object(session.ControlSession, '_parse_command')
async def test_listen(self, mock__parse_command: AsyncMock):
def make_reader_return_empty():
self.mock_reader.read.return_value = b''
self.mock_writer.drain = AsyncMock(side_effect=make_reader_return_empty)
msg = "fascinating"
self.mock_reader.read = AsyncMock(return_value=f' {msg} '.encode())
self.assertIsNone(await self.session.listen())
self.mock_reader.read.assert_has_awaits([call(SESSION_MSG_BYTES), call(SESSION_MSG_BYTES)])
mock__parse_command.assert_awaited_once_with(msg)
self.mock_writer.drain.assert_awaited_once_with()
self.mock_reader.read.reset_mock()
mock__parse_command.reset_mock()
self.mock_writer.drain.reset_mock()
self.mock_server.is_serving = MagicMock(return_value=False)
self.assertIsNone(await self.session.listen())
self.mock_reader.read.assert_not_awaited()
mock__parse_command.assert_not_awaited()
self.mock_writer.drain.assert_not_awaited()

View File

@ -1,134 +0,0 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.session_parser` module.
"""
from argparse import Action, ArgumentParser, HelpFormatter, ArgumentDefaultsHelpFormatter, RawTextHelpFormatter
from unittest import IsolatedAsyncioTestCase
from unittest.mock import MagicMock, patch
from asyncio_taskpool import session_parser
from asyncio_taskpool.constants import SESSION_WRITER, CLIENT_INFO
from asyncio_taskpool.exceptions import HelpRequested
FOO = 'foo'
class ControlServerTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.help_formatter_factory_patcher = patch.object(session_parser.CommandParser, 'help_formatter_factory')
self.mock_help_formatter_factory = self.help_formatter_factory_patcher.start()
self.mock_help_formatter_factory.return_value = RawTextHelpFormatter
self.session_writer, self.terminal_width = MagicMock(), 420
self.kwargs = {
SESSION_WRITER: self.session_writer,
CLIENT_INFO.TERMINAL_WIDTH: self.terminal_width,
session_parser.FORMATTER_CLASS: FOO
}
self.parser = session_parser.CommandParser(**self.kwargs)
def tearDown(self) -> None:
self.help_formatter_factory_patcher.stop()
def test_help_formatter_factory(self):
self.help_formatter_factory_patcher.stop()
class MockBaseClass(HelpFormatter):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
terminal_width = 123456789
cls = session_parser.CommandParser.help_formatter_factory(terminal_width, MockBaseClass)
self.assertTrue(issubclass(cls, MockBaseClass))
instance = cls('prog')
self.assertEqual(terminal_width, getattr(instance, '_width'))
cls = session_parser.CommandParser.help_formatter_factory(terminal_width)
self.assertTrue(issubclass(cls, ArgumentDefaultsHelpFormatter))
instance = cls('prog')
self.assertEqual(terminal_width, getattr(instance, '_width'))
def test_init(self):
self.assertIsInstance(self.parser, ArgumentParser)
self.assertEqual(self.session_writer, self.parser._session_writer)
self.assertEqual(self.terminal_width, self.parser._terminal_width)
self.mock_help_formatter_factory.assert_called_once_with(self.terminal_width, FOO)
self.assertFalse(getattr(self.parser, 'exit_on_error'))
self.assertEqual(RawTextHelpFormatter, getattr(self.parser, 'formatter_class'))
def test_session_writer(self):
self.assertEqual(self.session_writer, self.parser.session_writer)
def test_terminal_width(self):
self.assertEqual(self.terminal_width, self.parser.terminal_width)
def test__print_message(self):
self.session_writer.write = MagicMock()
self.assertIsNone(self.parser._print_message(''))
self.session_writer.write.assert_not_called()
msg = 'foo bar baz'
self.assertIsNone(self.parser._print_message(msg))
self.session_writer.write.assert_called_once_with(msg.encode())
@patch.object(session_parser.CommandParser, '_print_message')
def test_exit(self, mock__print_message: MagicMock):
self.assertIsNone(self.parser.exit(123, ''))
mock__print_message.assert_not_called()
msg = 'foo bar baz'
self.assertIsNone(self.parser.exit(123, msg))
mock__print_message.assert_called_once_with(msg)
@patch.object(session_parser.ArgumentParser, 'print_help')
def test_print_help(self, mock_print_help: MagicMock):
arg = MagicMock()
with self.assertRaises(HelpRequested):
self.parser.print_help(arg)
mock_print_help.assert_called_once_with(arg)
def test_add_optional_num_argument(self):
metavar = 'FOOBAR'
action = self.parser.add_optional_num_argument(metavar=metavar)
self.assertIsInstance(action, Action)
self.assertEqual('?', action.nargs)
self.assertEqual(1, action.default)
self.assertEqual(int, action.type)
self.assertEqual(metavar, action.metavar)
num = 111
kwargs = vars(self.parser.parse_args([f'{num}']))
self.assertDictEqual({session_parser.NUM: num}, kwargs)
name = f'--{FOO}'
nargs = '+'
default = 1
_type = float
required = True
dest = 'foo_bar'
action = self.parser.add_optional_num_argument(name, nargs=nargs, default=default, type=_type,
required=required, metavar=metavar, dest=dest)
self.assertIsInstance(action, Action)
self.assertEqual(nargs, action.nargs)
self.assertEqual(default, action.default)
self.assertEqual(_type, action.type)
self.assertEqual(required, action.required)
self.assertEqual(metavar, action.metavar)
self.assertEqual(dest, action.dest)
kwargs = vars(self.parser.parse_args([f'{num}', name, '1', '1.5']))
self.assertDictEqual({session_parser.NUM: num, dest: [1.0, 1.5]}, kwargs)

View File

@ -1,12 +1,18 @@
# Using `asyncio-taskpool` # Using `asyncio-taskpool`
## Contents
- [Contents](#contents)
- [Minimal example for `SimpleTaskPool`](#minimal-example-for-simpletaskpool)
- [Advanced example for `TaskPool`](#advanced-example-for-taskpool)
- [Control server example](#control-server-example)
## Minimal example for `SimpleTaskPool` ## Minimal example for `SimpleTaskPool`
With a `SimpleTaskPool`, the function to execute as well as the arguments with which to execute it must be defined during its initialization (and they cannot be changed later). The only control you have after initialization is how many such tasks are being run.
The minimum required setup is a "worker" coroutine function that can do something asynchronously, and a main coroutine function that sets up the `SimpleTaskPool`, starts/stops the tasks as desired, and eventually awaits them all. The minimum required setup is a "worker" coroutine function that can do something asynchronously, and a main coroutine function that sets up the `SimpleTaskPool`, starts/stops the tasks as desired, and eventually awaits them all.
The following demo code enables full log output first for additional clarity. It is complete and should work as is. The following demo script enables full log output first for additional clarity. It is complete and should work as is.
### Code
```python ```python
import logging import logging
@ -32,12 +38,12 @@ async def work(n: int) -> None:
async def main() -> None: async def main() -> None:
pool = SimpleTaskPool(work, (5,)) # initializes the pool; no work is being done yet pool = SimpleTaskPool(work, args=(5,)) # initializes the pool; no work is being done yet
await pool.start(3) # launches work tasks 0, 1, and 2 await pool.start(3) # launches work tasks 0, 1, and 2
await asyncio.sleep(1.5) # lets the tasks work for a bit await asyncio.sleep(1.5) # lets the tasks work for a bit
await pool.start() # launches work task 3 await pool.start() # launches work task 3
await asyncio.sleep(1.5) # lets the tasks work for a bit await asyncio.sleep(1.5) # lets the tasks work for a bit
pool.stop(2) # cancels tasks 3 and 2 pool.stop(2) # cancels tasks 3 and 2 (LIFO order)
pool.lock() # required for the last line pool.lock() # required for the last line
await pool.gather_and_close() # awaits all tasks, then flushes the pool await pool.gather_and_close() # awaits all tasks, then flushes the pool
@ -46,7 +52,9 @@ if __name__ == '__main__':
asyncio.run(main()) asyncio.run(main())
``` ```
### Output <details>
<summary>Output: (Click to expand)</summary>
``` ```
SimpleTaskPool-0 initialized SimpleTaskPool-0 initialized
Started SimpleTaskPool-0_Task-0 Started SimpleTaskPool-0_Task-0
@ -76,6 +84,7 @@ Ended SimpleTaskPool-0_Task-1
> did 4 > did 4
> did 4 > did 4
``` ```
</details>
## Advanced example for `TaskPool` ## Advanced example for `TaskPool`
@ -83,9 +92,7 @@ This time, we want to start tasks from _different_ coroutine functions **and** w
As with the simple example, we need "worker" coroutine functions that can do something asynchronously, as well as a main coroutine function that sets up the pool, starts the tasks, and eventually awaits them. As with the simple example, we need "worker" coroutine functions that can do something asynchronously, as well as a main coroutine function that sets up the pool, starts the tasks, and eventually awaits them.
The following demo code enables full log output first for additional clarity. It is complete and should work as is. The following demo script enables full log output first for additional clarity. It is complete and should work as is.
### Code
```python ```python
import logging import logging
@ -114,19 +121,19 @@ async def other_work(a: int, b: int) -> None:
async def main() -> None: async def main() -> None:
# Initialize a new task pool instance and limit its size to 3 tasks. # Initialize a new task pool instance and limit its size to 3 tasks.
pool = TaskPool(3) pool = TaskPool(3)
# Queue up two tasks (IDs 0 and 1) to run concurrently (with the same positional arguments). # Queue up two tasks (IDs 0 and 1) to run concurrently (with the same keyword-arguments).
print("> Called `apply`") print("> Called `apply`")
await pool.apply(work, kwargs={'start': 100, 'stop': 200, 'step': 10}, num=2) await pool.apply(work, kwargs={'start': 100, 'stop': 200, 'step': 10}, num=2)
# Let the tasks work for a bit. # Let the tasks work for a bit.
await asyncio.sleep(1.5) await asyncio.sleep(1.5)
# Now, let us enqueue four more tasks (which will receive IDs 2, 3, 4, and 5), each created with different # Now, let us enqueue four more tasks (which will receive IDs 2, 3, 4, and 5), each created with different
# positional arguments by using `starmap`, but have **no more than two of those** run concurrently. # positional arguments by using `starmap`, but we want no more than two of those to run concurrently.
# Since we set our pool size to 3, and already have two tasks working within the pool, # Since we set our pool size to 3, and already have two tasks working within the pool,
# only the first one of these will start immediately (and receive ID 2). # only the first one of these will start immediately (and receive ID 2).
# The second one will start (with ID 3), only once there is room in the pool, # The second one will start (with ID 3), only once there is room in the pool,
# which -- in this example -- will be the case after ID 2 ends. # which -- in this example -- will be the case after ID 2 ends.
# Once there is room in the pool again, the third and fourth will each start (with IDs 4 and 5) # Once there is room in the pool again, the third and fourth will each start (with IDs 4 and 5)
# **only** once there is room in the pool **and** no more than one other task of these new ones is running. # only once there is room in the pool and no more than one other task of these new ones is running.
args_list = [(0, 10), (10, 20), (20, 30), (30, 40)] args_list = [(0, 10), (10, 20), (20, 30), (30, 40)]
await pool.starmap(other_work, args_list, group_size=2) await pool.starmap(other_work, args_list, group_size=2)
print("> Called `starmap`") print("> Called `starmap`")
@ -142,10 +149,9 @@ if __name__ == '__main__':
asyncio.run(main()) asyncio.run(main())
``` ```
### Output <details>
Additional comments for the output are provided with `<---` next to the output lines. <summary>Output: (Click to expand)</summary>
(Keep in mind that the logger and `print` asynchronously write to `stdout`.)
``` ```
TaskPool-0 initialized TaskPool-0 initialized
Started TaskPool-0_Task-0 Started TaskPool-0_Task-0
@ -227,4 +233,37 @@ Ended TaskPool-0_Task-5
> Done. > Done.
``` ```
(Explanatory comments are added with `<---` next to the relevant output lines.)
Keep in mind that the logger and `print` asynchronously write to `stdout`, so the order of lines in your output may be slightly different.
</details>
## Control server example
One of the main features of `asyncio-taskpool` is the ability to control a task pool "from the outside" at runtime.
The [example_server.py](./example_server.py) script launches a couple of worker tasks within a `SimpleTaskPool` instance and then starts a `TCPControlServer` instance for that task pool. The server is configured to locally bind to port `9999` and is stopped automatically after the "work" is done.
To run the script:
```shell
python usage/example_server.py
```
You can then connect to the server via the command line interface:
```shell
python -m asyncio_taskpool.control tcp localhost 9999
```
The CLI starts a `TCPControlClient` that connects to our example server. Once the connection is established, it gives you an input prompt allowing you to issue commands to the task pool:
```
Connected to SimpleTaskPool-0
Type '-h' to get help and usage instructions for all available commands.
>
```
It may be useful to run the server script and the client interface in two separate terminal windows side by side. The server script is configured with a verbose logger and will react to any commands issued by the client with detailed log messages in the terminal.
---
© 2022 Daniil Fajnberg © 2022 Daniil Fajnberg
View File
@ -15,7 +15,7 @@ You should have received a copy of the GNU Lesser General Public License along w
If not, see <https://www.gnu.org/licenses/>.""" If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """ __doc__ = """
Working example of a UnixControlServer in combination with the SimpleTaskPool. Working example of a TCPControlServer in combination with the SimpleTaskPool.
Use the main CLI client to interface at the socket. Use the main CLI client to interface at the socket.
""" """
@ -23,8 +23,9 @@ Use the main CLI client to interface at the socket.
import asyncio import asyncio
import logging import logging
from asyncio_taskpool import SimpleTaskPool, UnixControlServer from asyncio_taskpool import SimpleTaskPool
from asyncio_taskpool.constants import PACKAGE_NAME from asyncio_taskpool.control import TCPControlServer
from asyncio_taskpool.internals.constants import PACKAGE_NAME
logging.getLogger().setLevel(logging.NOTSET) logging.getLogger().setLevel(logging.NOTSET)
@ -34,11 +35,11 @@ logging.getLogger(PACKAGE_NAME).addHandler(logging.StreamHandler())
async def work(item: int) -> None: async def work(item: int) -> None:
"""The non-blocking sleep simulates something like an I/O operation that can be done asynchronously.""" """The non-blocking sleep simulates something like an I/O operation that can be done asynchronously."""
await asyncio.sleep(1) await asyncio.sleep(1)
print("worked on", item) print("worked on", item, flush=True)
async def worker(q: asyncio.Queue) -> None: async def worker(q: asyncio.Queue) -> None:
"""Simulates doing asynchronous work that takes a little bit of time to finish.""" """Simulates doing asynchronous work that takes a bit of time to finish."""
# We only want the worker to stop, when its task is cancelled; therefore we start an infinite loop. # We only want the worker to stop, when its task is cancelled; therefore we start an infinite loop.
while True: while True:
# We want to block here, until we can get the next item from the queue. # We want to block here, until we can get the next item from the queue.
@ -65,18 +66,18 @@ async def main() -> None:
# We just put some integers into our queue, since all our workers actually do is print an item and sleep for a bit. # We just put some integers into our queue, since all our workers actually do is print an item and sleep for a bit.
for item in range(100): for item in range(100):
q.put_nowait(item) q.put_nowait(item)
pool = SimpleTaskPool(worker, (q,)) # initializes the pool pool = SimpleTaskPool(worker, args=(q,)) # initializes the pool
await pool.start(3) # launches three worker tasks await pool.start(3) # launches three worker tasks
control_server_task = await UnixControlServer(pool, path='/tmp/py_asyncio_taskpool.sock').serve_forever() control_server_task = await TCPControlServer(pool, host='127.0.0.1', port=9999).serve_forever()
# We block until `.task_done()` has been called once by our workers for every item placed into the queue. # We block until `.task_done()` has been called once by our workers for every item placed into the queue.
await q.join() await q.join()
# Since we don't need any "work" done anymore, we can lock our control server by cancelling the task. # Since we don't need any "work" done anymore, we can get rid of our control server by cancelling the task.
control_server_task.cancel() control_server_task.cancel()
# Since our workers should now be stuck waiting for more items to pick from the queue, but no items are left, # Since our workers should now be stuck waiting for more items to pick from the queue, but no items are left,
# we can now safely cancel their tasks. # we can now safely cancel their tasks.
pool.lock() pool.lock()
pool.stop_all() pool.stop_all()
# Finally we allow for all tasks to do do their cleanup, if they need to do any, upon being cancelled. # Finally, we allow for all tasks to do their cleanup (as if they need to do any) upon being cancelled.
# We block until they all return or raise an exception, but since we are not interested in any of their exceptions, # We block until they all return or raise an exception, but since we are not interested in any of their exceptions,
# we just silently collect their exceptions along with their return values. # we just silently collect their exceptions along with their return values.
await pool.gather_and_close(return_exceptions=True) await pool.gather_and_close(return_exceptions=True)