Compare commits

...

74 Commits

Author SHA1 Message Date
73aa93a9b7
Fix unit tests 2022-05-08 15:26:17 +02:00
051d0cb911
Fix control server response writes 2022-05-08 15:20:07 +02:00
ee0b8c0002
End control server response with newline 2022-05-07 16:16:50 +02:00
28c997e0ee
Fix client and session tests 2022-05-07 14:54:33 +02:00
5a72a6d1d1
Fix server's control session reading from socket; improve docstrings 2022-05-07 14:44:43 +02:00
72e380cd77
Version increment 2022-05-06 18:57:45 +02:00
85672bddeb
Add short docstring to until_closed method 2022-05-05 08:46:27 +02:00
dae883446a
Add until_closed method to pools 2022-05-05 08:35:32 +02:00
a4ecf39157
Fix socket path bug in control client 2022-04-27 17:02:34 +02:00
e3bbb05eac
Fix version number 2022-04-18 13:49:40 +02:00
36527ccffc
Improve exception handling in _start_num 2022-04-18 13:45:15 +02:00
d047b99119
Fix cancel message bug for Python 3.8; test coverage workaround for Python version conditions 2022-04-10 10:43:53 +02:00
d7cd16c540
Add docs on IDs and groups 2022-04-09 14:53:07 +02:00
db306a1a1f
Fixed Python 3.8 compatibility bugs; classmethod+property workaround; control session buffer 2022-04-08 11:53:53 +02:00
3a8fcb2d5a
Optimize coverage script/settings 2022-04-06 21:47:39 +02:00
cf02206588
Add RtD config 2022-04-04 10:06:39 +02:00
0796038dcd
Update Readme 2022-04-03 16:52:51 +02:00
f4e33baf82
Add workflows & badges 2022-04-03 16:33:28 +02:00
9b838b6130
improved exception handling/logging in _map 2022-03-31 21:05:01 +02:00
0daed04167
improved exception handling/logging in apply meta task 2022-03-31 11:04:31 +02:00
80fc91ec47
made start "non-async" using meta task 2022-03-30 21:51:19 +02:00
a72a7cc516
made a bunch of methods "non-async" 2022-03-30 18:16:28 +02:00
91d546ebc2
moved the entire meta task logic to the base class 2022-03-30 16:17:34 +02:00
5b3ac52bf6
start method now returns group name 2022-03-30 15:31:38 +02:00
82e6ca7b1a
task group naming logic changed 2022-03-30 12:37:32 +02:00
153127e028
lock before gathering meta tasks 2022-03-30 11:47:15 +02:00
17539e9c27
made apply non-blocking by using a meta-task 2022-03-29 20:01:44 +02:00
1beb9fc9b0
gather_and_close now automatically locks the pool 2022-03-29 19:43:21 +02:00
23a4cb028a
renamed group_size to num_concurrent in _map 2022-03-29 19:34:27 +02:00
54e5bfa8a0
drastically simplified meta-task internals 2022-03-29 19:34:13 +02:00
0e7e92a91b
better error handling when converting arguments 2022-03-26 10:29:34 +01:00
a9011076c4
fixed potential race cond. gathering meta tasks 2022-03-25 12:58:18 +01:00
7e34aa106d
sphinx documentation; adjusted all docstrings; moved some modules to non-public subpackage 2022-03-24 13:38:30 +01:00
4c6a5412ca removed num_cancellations and num_finished from the pool interface; added num_cancelled; made the num argument non-optional in the start and stop methods of the SimpleTaskPool; changed some internals; improved docstrings 2022-03-17 13:52:02 +01:00
44c03cc493 catching exceptions in _queue_consumer meta task 2022-03-16 16:57:03 +01:00
689a74c678 control interface now supports TaskPool instances:
dotted paths to coroutine functions can be passed to the parser as arguments for methods like `map`;
parser supports literal evaluation for the argument iterables in methods like `map`;
minor fixes
2022-03-16 11:27:27 +01:00
3503c0bf44 removed a few functions from the public API; fixed some docstrings/comments 2022-03-14 19:16:28 +01:00
3d104c979e typo 2022-03-14 18:13:51 +01:00
a92e646411 improved and extended usage/readme docs; small fixes 2022-03-14 18:09:30 +01:00
3d84e1552b version bump 2022-03-13 16:12:04 +01:00
38f4ec1b06 finally reached 100% unittest coverage overall 2022-03-13 16:11:20 +01:00
6f082288d8 brought unittest coverage up to 100% on control modules 2022-03-13 15:44:53 +01:00
9fde231250 moved control-related modules to a sub-package; minor corrections 2022-03-13 15:18:53 +01:00
c72a5035ea big rework of the session-parser-interaction;
dynamically adding pool methods/properties as parser commands;
dynamically executing selected pool method/property;
greatly simplified `ControlSession` class;
removed the need for hard-coded command names;
adjusted unittests accordingly
2022-03-13 14:56:56 +01:00
eb152e4d75 fixed unix server/client tests 2022-03-08 10:22:07 +01:00
d05f84b2c3 additional base commands for control server 2022-03-08 10:15:10 +01:00
7c66604ad0 renamed method get_task_group_ids and extended to accept any number of group names 2022-03-08 09:08:28 +01:00
287906a218 added unittests 2022-03-08 09:05:59 +01:00
ce0f9a1f65 version bump 2022-02-25 22:44:25 +01:00
5dad4ab0c7 implemented TCP socket control; switched example to TCP 2022-02-25 22:42:37 +01:00
ae6bb1bd17 skipping unix socket tests on Windows 2022-02-25 21:17:14 +01:00
e501a849f3 clarifications and corrections 2022-02-25 19:57:54 +01:00
ed6badb088 moved imports for unix socket connections to init methods of client and server 2022-02-25 19:09:28 +01:00
c63f079da4 huge update:
introduced meta tasks which are used by `_map()`;
introduced task groups;
ending with `gather_and_close()` now;
pool unittests rewritten accordingly;
two new helper classes
2022-02-24 19:16:24 +01:00
4994135062 renamed num_tasks in the map-methods to group_size; reworded/extended the docstrings 2022-02-19 16:02:50 +01:00
d0c0177681 full unit test coverage and docstrings for client module; minor refactoring 2022-02-19 12:56:08 +01:00
bac7b32342 full unit test coverage and docstrings for session_parser module; minor changes 2022-02-14 22:11:31 +01:00
96d01e7259 full unit test coverage and docstrings for session module 2022-02-14 17:59:11 +01:00
3f3eb7ce38 minor rephrasing 2022-02-13 21:12:35 +01:00
05d51eface simplified method 2022-02-13 20:58:36 +01:00
b6aed727e9 additional unit tests 2022-02-13 19:55:27 +01:00
c9a8d9ecd1 better placeholders 2022-02-13 19:45:53 +01:00
538b9cc91c major refactoring of the control session/parser classes; restructured constants 2022-02-13 19:39:21 +01:00
3fb451a00e docstrings and full test coverage for server module; small adjustments 2022-02-13 16:17:50 +01:00
be03097bf4 massive overhaul of the control server interface;
making use of `ArgumentParser` for client commands;
new `ControlSession` object instantiated upon connection
2022-02-12 22:51:52 +01:00
024e5db0d4 license & copyright notices; docstrings for each module; extended readme 2022-02-09 23:14:42 +01:00
bc9d2f243e renamed "closing" a pool to "locking" it 2022-02-08 23:09:33 +01:00
012c8ac639 testing notes in readme 2022-02-08 16:43:44 +01:00
8fd40839ee full unittest coverage for the helpers module 2022-02-08 16:40:13 +01:00
36d026f433 more precise comments 2022-02-08 16:24:37 +01:00
410e73e68b bugfix for TaskPool._map; usage example for TaskPool 2022-02-08 16:15:55 +01:00
727f0b7c8b added full test coverage and docstrings for TaskPool as well as minor improvements 2022-02-08 14:30:42 +01:00
63aab1a8f6 added full test coverage of TaskPool and further improved its _map() method 2022-02-08 13:29:57 +01:00
d48b20818f small improvement to the TaskPool._map args queue; added docstring to helper method 2022-02-08 12:23:16 +01:00
62 changed files with 6257 additions and 904 deletions


@@ -1,12 +1,12 @@
 [run]
 source = src/
 branch = true
-omit =
-    .venv/*
+command_line = -m unittest discover

 [report]
 fail_under = 100
 show_missing = True
 skip_covered = False
-omit =
-    tests/*
+exclude_lines =
+    if TYPE_CHECKING:
+    if __name__ == ['"]__main__['"]:

.github/workflows/main.yaml (new file, 88 lines)

@@ -0,0 +1,88 @@
name: CI

on:
  push:
    branches: [master]

jobs:
  tests:
    name: Python ${{ matrix.python-version }} Tests
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - '3.8'
          - '3.9'
          - '3.10'
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
          cache-dependency-path: 'requirements/dev.txt'
      - name: Upgrade packaging tools
        run: pip install -U pip
      - name: Install dependencies
        run: pip install -U -r requirements/dev.txt
      - name: Install asyncio-taskpool
        run: pip install -e .
      - name: Run tests for Python ${{ matrix.python-version }}
        if: ${{ matrix.python-version != '3.10' }}
        run: python -m tests
      - name: Run tests for Python 3.10 and save coverage
        if: ${{ matrix.python-version == '3.10' }}
        run: echo "coverage=$(./coverage.sh)" >> $GITHUB_ENV
    outputs:
      coverage: ${{ env.coverage }}

  update_badges:
    needs: tests
    name: Update Badges
    env:
      meta_gist_id: 3f8240a976e8781a765d9c74a583dcda
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Download `cloc`
        run: sudo apt-get update -y && sudo apt-get install -y cloc
      - name: Count lines of code/comments
        run: |
          echo "cloc_code=$(./cloc.sh -c src/)" >> $GITHUB_ENV
          echo "cloc_comments=$(./cloc.sh -m src/)" >> $GITHUB_ENV
          echo "cloc_commentpercent=$(./cloc.sh -p src/)" >> $GITHUB_ENV
      - name: Create badge for lines of code
        uses: Schneegans/dynamic-badges-action@v1.2.0
        with:
          auth: ${{ secrets.GIST_META_DATA }}
          gistID: ${{ env.meta_gist_id }}
          filename: cloc-code.json
          label: Lines of Code
          message: ${{ env.cloc_code }}
      - name: Create badge for lines of comments
        uses: Schneegans/dynamic-badges-action@v1.2.0
        with:
          auth: ${{ secrets.GIST_META_DATA }}
          gistID: ${{ env.meta_gist_id }}
          filename: cloc-comments.json
          label: Comments
          message: ${{ env.cloc_comments }} (${{ env.cloc_commentpercent }}%)
      - name: Create badge for test coverage
        uses: Schneegans/dynamic-badges-action@v1.2.0
        with:
          auth: ${{ secrets.GIST_META_DATA }}
          gistID: ${{ env.meta_gist_id }}
          filename: test-coverage.json
          label: Coverage
          message: ${{ needs.tests.outputs.coverage }}

.gitignore (3 changes)

@@ -3,9 +3,10 @@
 # IDE settings:
 /.idea/
 /.vscode/
-# Distribution / packaging:
+# Distribution / build files:
 *.egg-info/
 /dist/
+/docs/build/
 # Python cache:
 __pycache__/
 # Testing:

.readthedocs.yaml (new file, 11 lines)

@@ -0,0 +1,11 @@
version: 2
build:
  os: 'ubuntu-20.04'
  tools:
    python: '3.8'
python:
  install:
    - method: pip
      path: .
sphinx:
  fail_on_warning: true

COPYING (new file, 674 lines)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

COPYING.LESSER (new file, 165 lines)

@@ -0,0 +1,165 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

README.md

@@ -1,19 +1,101 @@
[//]: # (This file is part of asyncio-taskpool.)
[//]: # (asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of)
[//]: # (version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.)
[//]: # (asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;)
[//]: # (without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.)
[//]: # (See the GNU Lesser General Public License for more details.)
[//]: # (You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.)
[//]: # (If not, see <https://www.gnu.org/licenses/>.)
# asyncio-taskpool
[![GitHub last commit][github-last-commit-img]][github-last-commit]
![Lines of code][gist-cloc-code-img]
![Lines of comments][gist-cloc-comments-img]
![Test coverage][gist-test-coverage-img]
[![License: LGPL v3.0][lgpl3-img]][lgpl3]
[![PyPI version][pypi-latest-version-img]][pypi-latest-version]
**Dynamically manage pools of asyncio tasks**
Full documentation available at [RtD](https://asyncio-taskpool.readthedocs.io/en/latest).
---
## Contents
- [Contents](#contents)
- [Summary](#summary)
- [Usage](#usage)
- [Installation](#installation)
- [Dependencies](#dependencies)
- [Testing](#testing)
- [License](#license)
## Summary
A **task pool** is an object with a simple interface for aggregating and dynamically managing asynchronous tasks.
With an interface that is intentionally similar to the [`multiprocessing.Pool`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool) class from the standard library, the `TaskPool` provides you with methods such as `apply`, `map`, and `starmap` to execute coroutines concurrently as [`asyncio.Task`](https://docs.python.org/3/library/asyncio-task.html#task-object) objects. There is no limitation imposed on what kind of tasks can be run or in what combination, when new ones can be added, or when they can be cancelled.
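As a rough illustration of that interface, here is a minimal sketch; the coroutine, its arguments, and the exact keyword names (e.g. `num_concurrent`) are assumptions based on this description, not verbatim API documentation:
```python
import asyncio

from asyncio_taskpool import TaskPool


async def fetch_item(item_id: int) -> None:
    ...  # hypothetical coroutine doing some asynchronous work


async def main() -> None:
    pool = TaskPool()
    # Run `fetch_item` for every ID, keeping (assumed) at most 10 tasks active at a time:
    pool.map(fetch_item, range(100), num_concurrent=10)
    # More tasks can be added while the ones above are still running:
    pool.apply(fetch_item, args=(1000,))
    # Lock the pool and wait for all tasks in it to finish:
    await pool.gather_and_close()


asyncio.run(main())
```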
For a more streamlined use-case, the `SimpleTaskPool` provides an even more intuitive and simple interface at the cost of flexibility.
If you need control over a task pool at runtime, you can launch an asynchronous `ControlServer` to be able to interface with the pool from an outside process or via a network, and stop/start tasks within the pool as you wish.
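A bare-bones sketch of that idea might look as follows; the `TCPControlServer` name, its import path, its constructor parameters, and the `serve_forever` call are assumptions made for illustration, not confirmed API:
```python
import asyncio

from asyncio_taskpool import SimpleTaskPool
from asyncio_taskpool.control import TCPControlServer  # assumed import path


async def work(n: int) -> None:
    ...  # hypothetical coroutine


async def main() -> None:
    pool = SimpleTaskPool(work, args=(42,))
    pool.start(3)
    # Expose the pool on a local TCP port so that an external client can
    # inspect it and start/stop tasks at runtime (host/port chosen arbitrarily):
    control_server_task = await TCPControlServer(pool, host='127.0.0.1', port=9999).serve_forever()
    ...  # do other work while the pool and the control server are running
    control_server_task.cancel()
    await pool.gather_and_close()


asyncio.run(main())
```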
## Usage
Generally speaking, a task is added to a pool by providing it with a coroutine function reference as well as the arguments for that function. Here is what that could look like in the most simplified form:
```python
from asyncio_taskpool import SimpleTaskPool
...
async def work(_foo, _bar): ...
async def main():
    pool = SimpleTaskPool(work, args=('xyz', 420))
    pool.start(5)
    ...
    pool.stop(3)
    ...
    await pool.gather_and_close()
```
Since one of the main goals of `asyncio-taskpool` is to be able to start/stop tasks dynamically or "on-the-fly", _most_ of the associated methods are non-blocking _most_ of the time. A notable exception is the `gather_and_close` method for awaiting the return of all tasks in the pool. (It is essentially a glorified wrapper around the [`asyncio.gather`](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) function.)
For working and fully documented demo scripts see [USAGE.md](usage/USAGE.md).
## Installation
```shell
pip install asyncio-taskpool
```
## Dependencies
Python Version 3.8+, tested on Linux
## Testing
Install [`coverage`](https://coverage.readthedocs.io/en/latest/) with `pip`, then execute the [`./coverage.sh`](coverage.sh) shell script to run all unit tests and save the coverage report.
## License
`asyncio-taskpool` is licensed under the **GNU LGPL version 3.0** specifically.
The full license texts for the [GNU GPLv3.0](COPYING) and the [GNU LGPLv3.0](COPYING.LESSER) are included in this repository. If not, see https://www.gnu.org/licenses/.
---
© 2022 Daniil Fajnberg
[github-last-commit]: https://github.com/daniil-berg/asyncio-taskpool/commits
[github-last-commit-img]: https://img.shields.io/github/last-commit/daniil-berg/asyncio-taskpool?label=Last%20commit&logo=git&
[gist-cloc-code-img]: https://img.shields.io/endpoint?logo=python&color=blue&url=https://gist.githubusercontent.com/daniil-berg/3f8240a976e8781a765d9c74a583dcda/raw/cloc-code.json
[gist-cloc-comments-img]: https://img.shields.io/endpoint?logo=sharp&color=lightgrey&url=https://gist.githubusercontent.com/daniil-berg/3f8240a976e8781a765d9c74a583dcda/raw/cloc-comments.json
[gist-test-coverage-img]: https://img.shields.io/endpoint?logo=pytest&color=blue&url=https://gist.githubusercontent.com/daniil-berg/3f8240a976e8781a765d9c74a583dcda/raw/test-coverage.json
[lgpl3]: https://www.gnu.org/licenses/lgpl-3.0
[lgpl3-img]: https://img.shields.io/badge/License-LGPL_v3.0-darkgreen.svg?logo=gnu
[pypi-latest-version-img]: https://img.shields.io/pypi/v/asyncio-taskpool?color=teal&logo=pypi
[pypi-latest-version]: https://pypi.org/project/asyncio-taskpool/

cloc.sh Executable file

@ -0,0 +1,46 @@
#!/usr/bin/env bash
# This file is part of asyncio-taskpool.
# asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
# version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
# asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
# If not, see <https://www.gnu.org/licenses/>.
typeset option
if getopts 'bcmp' option; then
    if [[ ${option} == [bcmp] ]]; then
        shift
    else
        echo >&2 "Invalid option '$1' provided"
        exit 1
    fi
fi

typeset source=$1
if [[ -z ${source} ]]; then
    echo >&2 Source file/directory missing
    exit 1
fi

typeset blank code comment commentpercent
read blank comment code commentpercent < <( \
    cloc --csv --quiet --hide-rate --include-lang Python ${source} |
    awk -F, '$2 == "SUM" {printf ("%d %d %d %1.0f", $3, $4, $5, 100 * $4 / ($5 + $4)); exit}'
)

case ${option} in
    b) echo ${blank} ;;
    c) echo ${code} ;;
    m) echo ${comment} ;;
    p) echo ${commentpercent} ;;
    *) echo Blank lines: ${blank}
       echo Lines of comments: ${comment}
       echo Lines of code: ${code}
       echo Comment percentage: ${commentpercent} ;;
esac


@ -1,3 +1,25 @@
#!/usr/bin/env bash
# This file is part of asyncio-taskpool.
# asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
# version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
# asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
# If not, see <https://www.gnu.org/licenses/>.
coverage erase
coverage run 2> /dev/null
typeset report=$(coverage report)
typeset total=$(echo "${report}" | awk '$1 == "TOTAL" {print $NF; exit}')
if [[ ${total} == 100% ]]; then
    echo ${total}
else
    echo "${report}"
fi

docs/Makefile Normal file

@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/make.bat Normal file

@ -0,0 +1,35 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd


docs/source/api/api.rst Normal file

@ -0,0 +1,7 @@
API
===
.. toctree::
    :maxdepth: 4

    asyncio_taskpool


@ -0,0 +1,7 @@
asyncio\_taskpool.control.client module
=======================================
.. automodule:: asyncio_taskpool.control.client
    :members:
    :undoc-members:
    :show-inheritance:


@ -0,0 +1,16 @@
asyncio\_taskpool.control package
=================================
.. automodule:: asyncio_taskpool.control
    :members:
    :undoc-members:
    :show-inheritance:

Submodules
----------

.. toctree::
    :maxdepth: 4

    asyncio_taskpool.control.client
    asyncio_taskpool.control.server


@ -0,0 +1,7 @@
asyncio\_taskpool.control.server module
=======================================
.. automodule:: asyncio_taskpool.control.server
    :members:
    :undoc-members:
    :show-inheritance:


@ -0,0 +1,7 @@
asyncio\_taskpool.exceptions module
===================================
.. automodule:: asyncio_taskpool.exceptions
    :members:
    :undoc-members:
    :show-inheritance:


@ -0,0 +1,7 @@
asyncio\_taskpool.pool module
=============================
.. automodule:: asyncio_taskpool.pool
    :members:
    :undoc-members:
    :show-inheritance:


@ -0,0 +1,7 @@
asyncio\_taskpool.queue\_context module
=======================================
.. automodule:: asyncio_taskpool.queue_context
    :members:
    :undoc-members:
    :show-inheritance:


@ -0,0 +1,25 @@
asyncio\_taskpool package
=========================
.. automodule:: asyncio_taskpool
    :members:
    :undoc-members:
    :show-inheritance:

Subpackages
-----------

.. toctree::
    :maxdepth: 4

    asyncio_taskpool.control

Submodules
----------

.. toctree::
    :maxdepth: 4

    asyncio_taskpool.exceptions
    asyncio_taskpool.pool
    asyncio_taskpool.queue_context

docs/source/conf.py Normal file

@ -0,0 +1,60 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
project = 'asyncio-taskpool'
copyright = '2022 Daniil Fajnberg'
author = 'Daniil Fajnberg'
# The full version, including alpha/beta/rc tags
release = '1.1.4'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.duration',
'sphinx.ext.napoleon'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme_options = {
'style_external_links': True,
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

docs/source/index.rst Normal file

@ -0,0 +1,58 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
Welcome to the asyncio-taskpool documentation!
==============================================
:code:`asyncio-taskpool` is a Python library for dynamically and conveniently managing pools of `asyncio <https://docs.python.org/3/library/asyncio.html>`_ tasks.
Purpose
-------
A `task <https://docs.python.org/3/library/asyncio-task.html>`_ is a very powerful tool of concurrency in the Python world. Since concurrency always implies doing more than one thing at a time, you rarely deal with just one :code:`Task` instance. However, managing multiple tasks can quickly become cumbersome as their number increases. Moreover, especially in long-running code, you may find it useful (or even necessary) to dynamically adjust the extent to which the work is distributed, i.e. increase or decrease the number of tasks.
With that in mind, this library aims to provide two things:
#. An additional layer of abstraction and convenience for managing multiple tasks.
#. A simple interface for dynamically adding and removing tasks when a program is already running.
The first is achieved through the concept of a :doc:`task pool <pages/pool>`. The second is achieved by adding a :doc:`control server <pages/control>` to the task pool.
Installation
------------
.. code-block:: bash

    $ pip install asyncio-taskpool
Contents
--------
.. toctree::
    :maxdepth: 2

    pages/pool
    pages/ids
    pages/control
    api/api
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -0,0 +1,107 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
Control interface
=================
When you are dealing with programs that run for a long period of time or even as daemons (i.e. indefinitely), having a way to adjust their behavior without needing to stop and restart them can be desirable.
Task pools offer a high degree of flexibility regarding the number and kind of tasks that run within them, by providing methods to easily start and stop tasks and task groups. But without additional tools, they only allow you to establish a control logic *a priori*, as demonstrated in :ref:`this code snippet <simple-control-logic>`.
What if you have a long-running program that executes certain tasks concurrently, but you don't know in advance how many of them you'll need? What if you want to be able to adjust the number of tasks manually **without stopping the task pool**?
The control server
------------------
The :code:`asyncio-taskpool` library comes with a simple control interface for managing task pools that are already running, at the heart of which is the :py:class:`ControlServer <asyncio_taskpool.control.server.ControlServer>`. Any task pool can be passed to a control server. Once the server is running, you can issue commands to it either via TCP or via UNIX socket. The commands map directly to the task pool methods.
To enable control over a :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>` via local TCP port :code:`8001`, all you need to do is this:
.. code-block:: python
    :caption: main.py
    :name: control-server-minimal

    from asyncio_taskpool import SimpleTaskPool
    from asyncio_taskpool.control import TCPControlServer
    from .work import any_worker_func

    async def main():
        ...
        pool = SimpleTaskPool(any_worker_func, kwargs={'foo': 42, 'bar': some_object})
        control = await TCPControlServer(pool, host='127.0.0.1', port=8001).serve_forever()
        await control
Under the hood, the :py:class:`ControlServer <asyncio_taskpool.control.server.ControlServer>` simply uses :code:`asyncio.start_server` for instantiating a socket server. The resulting control task will run indefinitely. Cancelling the control task stops the server.
In reality, you would probably want some graceful handler for an interrupt signal that cancels any remaining tasks as well as the serving control task.
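As a rough starting point, here is one way such a handler could look. This is a hedged sketch, not a prescribed recipe: it reuses the setup from the snippet above, assumes a POSIX event loop (for :code:`add_signal_handler`), and relies only on methods documented elsewhere (cancelling the control task, :py:meth:`SimpleTaskPool.stop_all() <asyncio_taskpool.pool.SimpleTaskPool.stop_all>` and :py:meth:`.gather_and_close() <asyncio_taskpool.pool.BaseTaskPool.gather_and_close>`); :code:`any_worker_func` and :code:`some_object` are placeholders as before.

.. code-block:: python
    :caption: main.py (hypothetical graceful shutdown)

    import signal
    from asyncio import CancelledError, get_running_loop

    from asyncio_taskpool import SimpleTaskPool
    from asyncio_taskpool.control import TCPControlServer
    from .work import any_worker_func

    async def main():
        ...
        pool = SimpleTaskPool(any_worker_func, kwargs={'foo': 42, 'bar': some_object})
        control = await TCPControlServer(pool, host='127.0.0.1', port=8001).serve_forever()

        def _shutdown() -> None:
            control.cancel()  # Cancelling the control task stops the server.
            pool.stop_all()   # Cancel whatever tasks are still running in the pool.

        loop = get_running_loop()
        for sig in (signal.SIGINT, signal.SIGTERM):
            loop.add_signal_handler(sig, _shutdown)
        try:
            await control
        except CancelledError:
            pass
        await pool.gather_and_close()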
The control client
------------------
Technically, any process that can read from and write to the socket exposed by the control server will be able to interact with it. The :code:`asyncio-taskpool` package has its own simple implementation in the form of the :py:class:`ControlClient <asyncio_taskpool.control.client.ControlClient>` that makes it easy to use out of the box.
To start a client, you can use the main script of the :py:mod:`asyncio_taskpool.control` sub-package like this:
.. code-block:: bash

    $ python -m asyncio_taskpool.control tcp localhost 8001
This would establish a connection to the control server from the previous example. Calling
.. code-block:: bash

    $ python -m asyncio_taskpool.control -h
will display the available client options.
The control session
-------------------
Assuming you connected successfully, you should be greeted by the server with a help message and dropped into a simple input prompt.
.. code-block:: none

    Connected to SimpleTaskPool-0
    Type '-h' to get help and usage instructions for all available commands.

    >
The input sent to the server is handled by a typical argument parser, so the interface should be straightforward. A command like

.. code-block:: none

    > start 5
will call the :py:meth:`.start() <asyncio_taskpool.pool.SimpleTaskPool.start>` method with :code:`5` as an argument and thus start 5 new tasks in the pool, while the command
.. code-block:: none

    > pool-size
will call the :py:meth:`.pool_size <asyncio_taskpool.pool.BaseTaskPool.pool_size>` property getter and return the maximum number of tasks that can run in the pool.
When you are dealing with a regular :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` instance, starting new tasks works just fine, as long as the coroutine functions you want to use can be imported into the namespace of the pool. If you have a function named :code:`worker` in the module :code:`mymodule` under the package :code:`mypackage` and want to use it in a :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` call with the arguments :code:`'x'`, :code:`'y'`, and :code:`'z'`, you would do it like this:
.. code-block:: none

    > map mypackage.mymodule.worker ['x','y','z'] -n 3
The :code:`-n` is a shorthand for :code:`--num-concurrent` in this case. In general, all (public) pool methods will have a corresponding command in the control session.
.. note::

    The :code:`ast.literal_eval` function from the `standard library <https://docs.python.org/3/library/ast.html#ast.literal_eval>`_ is used to safely evaluate the iterable of arguments to work on. For obvious reasons, being able to provide arbitrary python objects in such a control session is neither practical nor secure. The way this is implemented now is limited in that regard, since you can only use Python literals and containers as arguments for your coroutine functions.
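For illustration, this is roughly the kind of conversion that amounts to (a plain Python snippet, independent of the library itself):

.. code-block:: python

    from ast import literal_eval

    # The argument iterable arrives as a string from the control session...
    args = literal_eval("['x','y','z']")
    assert args == ['x', 'y', 'z']

    # ...while anything that is not a literal (or container of literals) is rejected.
    try:
        literal_eval("__import__('os').system('...')")
    except ValueError:
        print("Rejected: only literals and containers of literals are allowed.")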
To exit a control session, use the :code:`exit` command or simply press :code:`Ctrl + D`.

docs/source/pages/ids.rst Normal file

@ -0,0 +1,42 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
IDs, groups & names
===================
Task IDs
--------
Every task spawned within a pool receives an ID, which is an integer greater than or equal to 0 that is unique **within that task pool instance**. An internal counter is incremented whenever a new task is spawned. A task with ID :code:`n` was the :code:`(n+1)`-th task to be spawned in the pool. Task IDs can be used to cancel specific tasks using the :py:meth:`.cancel() <asyncio_taskpool.pool.BaseTaskPool.cancel>` method.
In practice, it should rarely be necessary to target *specific* tasks. When dealing with a regular :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` instance, you would typically cancel entire task groups (see below) rather than individual tasks, whereas with :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>` instances you would indiscriminately cancel a number of tasks using the :py:meth:`.stop() <asyncio_taskpool.pool.SimpleTaskPool.stop>` method.
The ID of a pool task also appears in the task's name, which is set upon spawning it. (See `here <https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.set_name>`_ for the associated method of the :code:`Task` class.)
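A tiny sketch of cancelling by ID (hedged: it assumes :py:meth:`.cancel() <asyncio_taskpool.pool.BaseTaskPool.cancel>` accepts one or more task IDs as positional arguments, that the tasks in question have actually been spawned by the time it is called, and it reuses the placeholder worker function and queues from the usage docs):

.. code-block:: python

    from asyncio_taskpool import TaskPool
    from .work import queue_worker_function

    async def main():
        ...
        pool = TaskPool()
        pool.apply(queue_worker_function, args=(q_in, q_out), num=3)  # spawns tasks with IDs 0, 1, 2
        ...
        # Assuming tasks 0 and 2 were spawned and are still running at this point:
        pool.cancel(0, 2)  # task 1 is unaffected
        ...
        await pool.gather_and_close()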
Task groups
-----------
Every method of spawning new tasks in a task pool will add them to a **task group** and return the name of that group. With :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` methods such as :py:meth:`.apply() <asyncio_taskpool.pool.TaskPool.apply>` and :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>`, the group name can be set explicitly via the :code:`group_name` parameter. By default, the name will be a string containing some meta information depending on which method is used. Passing an existing task group name in any of those methods will result in an :py:class:`InvalidGroupName <asyncio_taskpool.exceptions.InvalidGroupName>` error.
You can cancel entire task groups using the :py:meth:`.cancel_group() <asyncio_taskpool.pool.BaseTaskPool.cancel_group>` method by passing it the group name. To check which tasks belong to a group, the :py:meth:`.get_group_ids() <asyncio_taskpool.pool.BaseTaskPool.get_group_ids>` method can be used, which takes group names and returns the IDs of the tasks belonging to them.
The :py:meth:`SimpleTaskPool.start() <asyncio_taskpool.pool.SimpleTaskPool.start>` method also creates a new group each time it is called, but it does not allow customizing the group name. Typically, it will not be necessary to keep track of groups in a :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>` instance.
Task groups do not impose limits on the number of tasks in them, although they can be indirectly constrained by pool size limits.
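To see how these pieces fit together, here is a short sketch (the worker function, the queues, and the custom group name :code:`'queue-workers'` are purely illustrative; the methods used are the ones documented above):

.. code-block:: python

    from asyncio_taskpool import TaskPool
    from .work import queue_worker_function

    async def main():
        ...
        pool = TaskPool()
        # Every spawning method returns the name of the task group it created.
        group_name = pool.apply(queue_worker_function, args=(q_in, q_out), num=5, group_name='queue-workers')
        ...
        # Look up which task IDs belong to that group...
        print(pool.get_group_ids(group_name))
        # ...or cancel the entire group in one call.
        pool.cancel_group(group_name)
        ...
        await pool.flush()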
Pool names
----------
When initializing a task pool, you can provide a custom name for it, which will appear in its string representation, e.g. when using it in a :code:`print()`. A class attribute keeps track of initialized task pools and assigns each one an index (similar to IDs for pool tasks). If no name is specified when creating a new pool, its index is used in the string representation of it. Pool names can be helpful when using multiple pools and analyzing log messages.
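As a small, hypothetical illustration (the :code:`name` keyword and the exact strings shown in the comments are assumptions based on the description above, not verbatim output):

.. code-block:: python

    from asyncio_taskpool import TaskPool

    async def main():
        named_pool = TaskPool(name='downloads')
        unnamed_pool = TaskPool()
        print(named_pool)    # might print something like: TaskPool-downloads
        print(unnamed_pool)  # falls back to the pool's index, e.g.: TaskPool-1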

docs/source/pages/pool.rst Normal file

@ -0,0 +1,233 @@
.. This file is part of asyncio-taskpool.
.. asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
.. asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
.. You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>.
.. Copyright © 2022 Daniil Fajnberg
Task pools
==========
What is a task pool?
--------------------
A task pool is an object with a simple interface for aggregating and dynamically managing asynchronous tasks.
To make use of task pools, your code obviously needs to contain coroutine functions (introduced with the :code:`async def` keywords). By adding such functions along with their arguments to a task pool, they are turned into tasks and executed asynchronously.
If you are familiar with the :code:`Pool` class of the `multiprocessing module <https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool>`_ from the standard library, then you should feel at home with the :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` class. Obviously, there are major conceptual and functional differences between the two, but the methods provided by the :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` follow a very similar logic. If you have never worked with process or thread pools, don't worry. Task pools are much simpler.
The :code:`TaskPool` class
--------------------------
There are essentially two distinct use cases for a concurrency pool. You want to
#. execute a function *n* times with the same arguments concurrently or
#. execute a function *n* times with different arguments concurrently.
The first is accomplished with the :py:meth:`TaskPool.apply() <asyncio_taskpool.pool.TaskPool.apply>` method, while the second is accomplished with the :py:meth:`TaskPool.map() <asyncio_taskpool.pool.TaskPool.map>` method and its variations :py:meth:`.starmap() <asyncio_taskpool.pool.TaskPool.starmap>` and :py:meth:`.doublestarmap() <asyncio_taskpool.pool.TaskPool.doublestarmap>`.
Let's take a look at an example. Say you have a coroutine function that takes two queues as arguments: The first one being an input-queue (containing items to work on) and the second one being the output queue (for passing on the results to some other function). Your function may look something like this:
.. code-block:: python
    :caption: work.py
    :name: queue-worker-function

    from asyncio.queues import Queue

    async def queue_worker_function(in_queue: Queue, out_queue: Queue) -> None:
        while True:
            item = await in_queue.get()
            ...  # Do some work on the item and arrive at a result.
            await out_queue.put(result)
How would we go about concurrently executing this function, say 5 times? There are (as always) a number of ways to do this with :code:`asyncio`. If we want to use tasks and be clean about it, we can do it like this:
.. code-block:: python
    :caption: main.py

    from asyncio.tasks import create_task, gather
    from .work import queue_worker_function

    ...
    # We assume that the queues have been initialized already.
    tasks = []
    for _ in range(5):
        new_task = create_task(queue_worker_function(q_in, q_out))
        tasks.append(new_task)
    # Run some other code and let the tasks do their thing.
    ...
    # At some point, we want the tasks to stop waiting for new items and end.
    for task in tasks:
        task.cancel()
    ...
    await gather(*tasks)
By contrast, here is how you would do it with a task pool:
.. code-block:: python
    :caption: main.py

    from asyncio_taskpool import TaskPool
    from .work import queue_worker_function

    ...
    pool = TaskPool()
    group_name = pool.apply(queue_worker_function, args=(q_in, q_out), num=5)
    ...
    pool.cancel_group(group_name)
    ...
    await pool.flush()
Pretty much self-explanatory, no? (See :doc:`here <./ids>` for more information about groups/names).
Let's consider a slightly more involved example. Assume you have a coroutine function that takes just one argument (some data) as input, does some work with it (maybe connects to the internet in the process), and eventually writes its results to a database (which is globally defined). Here is how that might look:
.. code-block:: python
    :caption: work.py
    :name: another-worker-function

    from .my_database_stuff import insert_into_results_table

    async def another_worker_function(data: object) -> None:
        if data.some_attribute > 1:
            ...
        # Do the work, arrive at results.
        await insert_into_results_table(results)
Say we have some *iterator* of data-items (of arbitrary length) that we want to be worked on, and say we want 5 coroutines concurrently working on that data. Here is a very naive task-based solution:
.. code-block:: python
    :caption: main.py

    from asyncio.tasks import create_task, gather
    from .work import another_worker_function

    async def main():
        ...
        # We got our data_iterator from somewhere.
        keep_going = True
        while keep_going:
            tasks = []
            for _ in range(5):
                try:
                    data = next(data_iterator)
                except StopIteration:
                    keep_going = False
                    break
                new_task = create_task(another_worker_function(data))
                tasks.append(new_task)
            await gather(*tasks)
Here we already run into problems with the task-based approach. The last line in our :code:`while`-loop blocks until **all 5 tasks** return (or raise an exception). This means that as soon as one of them returns, the number of working coroutines is already less than 5 (until all the others return). This can obviously be solved in different ways. We could, for instance, wrap the creation of new tasks in a coroutine that immediately creates a new task whenever one finishes, and then call that coroutine 5 times concurrently. Or we could use the queue-based approach from before, but then we would need to write some queue-producing coroutine.
Or we could use a task pool:
.. code-block:: python
    :caption: main.py

    from asyncio_taskpool import TaskPool
    from .work import another_worker_function

    async def main():
        ...
        pool = TaskPool()
        pool.map(another_worker_function, data_iterator, num_concurrent=5)
        ...
        await pool.gather_and_close()
Calling the :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` method this way ensures that there will **always** -- i.e. at any given moment in time -- be exactly 5 tasks working concurrently on our data (assuming no other pool interaction).
The :py:meth:`.gather_and_close() <asyncio_taskpool.pool.BaseTaskPool.gather_and_close>` line will block until **all the data** has been consumed. (see :ref:`blocking-pool-methods`)
.. note::

    Neither :py:meth:`.apply() <asyncio_taskpool.pool.TaskPool.apply>` nor :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` return coroutines. When they are called, the task pool immediately begins scheduling new tasks to run. No :code:`await` needed.
It can't get any simpler than that, can it? So glad you asked...
The :code:`SimpleTaskPool` class
--------------------------------
Let's take the :ref:`queue worker example <queue-worker-function>` from before. If we know that the task pool will only ever work with that one function with the same queue objects, we can make use of the :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>` class:
.. code-block:: python
    :caption: main.py

    from asyncio_taskpool import SimpleTaskPool
    from .work import queue_worker_function

    async def main():
        ...
        pool = SimpleTaskPool(queue_worker_function, args=(q_in, q_out))
        pool.start(5)
        ...
        pool.stop_all()
        ...
        await pool.gather_and_close()
This may, at first glance, not seem like much of a difference, aside from different method names. However, assume that our main function runs a loop and needs to be able to periodically regulate the number of tasks being executed in the pool based on some additional variables it receives. With the :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>`, this could not be simpler:
.. code-block:: python
    :caption: main.py
    :name: simple-control-logic

    from asyncio_taskpool import SimpleTaskPool
    from .work import queue_worker_function

    async def main():
        ...
        pool = SimpleTaskPool(queue_worker_function, args=(q_in, q_out))
        pool.start(5)
        while True:
            ...
            if some_condition and pool.num_running > 10:
                pool.stop(3)
            elif some_other_condition and pool.num_running < 5:
                pool.start(5)
            else:
                pool.start(1)
        ...
        await pool.gather_and_close()
Notice how we only specify the function and its arguments during initialization of the pool. From that point on, all we need are the :py:meth:`.start() <asyncio_taskpool.pool.SimpleTaskPool.start>` and :py:meth:`.stop() <asyncio_taskpool.pool.SimpleTaskPool.stop>` methods to adjust the number of concurrently running tasks.
The trade-off here is that this simplified task pool class lacks the flexibility of the regular :py:class:`TaskPool <asyncio_taskpool.pool.TaskPool>` class. On an instance of the latter we can call :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` and :py:meth:`.apply() <asyncio_taskpool.pool.TaskPool.apply>` as often as we like with completely unrelated functions and arguments. With a :py:class:`SimpleTaskPool <asyncio_taskpool.pool.SimpleTaskPool>`, once you initialize it, it is pegged to one function and one set of arguments, and all you can do is control the number of tasks working with those.
This simplified interface becomes particularly useful in conjunction with the :doc:`control server <./control>`.
.. _blocking-pool-methods:
(Non-)Blocking pool methods
---------------------------
One of the main concerns when dealing with concurrent programs in general and with :code:`async` functions in particular is when and how a particular piece of code **blocks** during execution, i.e. delays the execution of the following code significantly.
.. note::

    Every statement will block to *some* extent. Obviously, when a program does something, that takes time. This is why the proper question to ask is not *if* but *to what extent, under which circumstances* the execution of a particular line of code blocks.
It is fair to assume that anyone reading this is familiar enough with the concepts of asynchronous programming in Python to know that just slapping :code:`async` in front of a function definition will not magically make it suitable for concurrent execution (in any meaningful way). Therefore, we assume that you are dealing with coroutines that can actually unblock the `event loop <https://docs.python.org/3/library/asyncio-eventloop.html>`_ (e.g. doing a significant amount of I/O).
So how does the task pool behave in that regard?
The only method of a pool that one should **always** assume to be blocking is :py:meth:`.gather_and_close() <asyncio_taskpool.pool.BaseTaskPool.gather_and_close>`. This method awaits **all** tasks in the pool, meaning as long as one of them is still running, this coroutine will not return.
.. warning::

    This includes awaiting any callbacks that were passed along with the tasks.
One method to be aware of is :py:meth:`.flush() <asyncio_taskpool.pool.BaseTaskPool.flush>`. Since it will await only those tasks that the pool considers **ended** or **cancelled**, the blocking can only come from any callbacks that were provided for either of those situations.
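For instance, a long-running program might flush periodically to clear out ended tasks without ever blocking on the ones still running. A sketch under obvious assumptions (:code:`data_iterator` and :code:`shutdown_requested` are placeholders, and the sleep interval is arbitrary):

.. code-block:: python

    from asyncio import sleep

    from asyncio_taskpool import TaskPool
    from .work import another_worker_function

    async def main():
        ...
        pool = TaskPool()
        pool.map(another_worker_function, data_iterator, num_concurrent=5)
        while not shutdown_requested:
            await sleep(60)
            # Awaits only tasks the pool already considers ended or cancelled
            # (plus any callbacks attached to them), so this returns quickly.
            await pool.flush()
        await pool.gather_and_close()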
All methods that add tasks to a pool, i.e. :py:meth:`TaskPool.map() <asyncio_taskpool.pool.TaskPool.map>` (and its variants), :py:meth:`TaskPool.apply() <asyncio_taskpool.pool.TaskPool.apply>` and :py:meth:`SimpleTaskPool.start() <asyncio_taskpool.pool.SimpleTaskPool.start>`, are non-blocking by design. They all make use of "meta tasks" under the hood and return immediately. It is important to realize, however, that just because they return does not mean that any actual tasks have been spawned. For example, if a pool size limit was set and there was "no more room" in the pool when :py:meth:`.map() <asyncio_taskpool.pool.TaskPool.map>` was called, there is **no guarantee** that even a single task has started by the time it returns.


@ -1,2 +1,4 @@
-r common.txt
coverage
sphinx
sphinx-rtd-theme


@ -1,17 +1,25 @@
[metadata]
name = asyncio-taskpool
version = 1.1.4
author = Daniil Fajnberg
author_email = mail@daniil.fajnberg.de
description = Dynamically manage pools of asyncio tasks
long_description = file: README.md
long_description_content_type = text/markdown
keywords = asyncio, concurrency, tasks, coroutines, asynchronous, server
url = https://git.fajnberg.de/daniil/asyncio-taskpool
project_urls =
    Bug Tracker = https://github.com/daniil-berg/asyncio-taskpool/issues
classifiers =
    Development Status :: 5 - Production/Stable
    Programming Language :: Python :: 3
    Operating System :: OS Independent
    License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)
    Intended Audience :: Developers
    Intended Audience :: System Administrators
    Framework :: AsyncIO
    Topic :: Software Development :: Libraries
    Topic :: Software Development :: Libraries :: Python Modules

[options]
package_dir =
@ -22,6 +30,8 @@ python_requires = >=3.8
[options.extras_require]
dev =
    coverage
    sphinx
    sphinx-rtd-theme

[options.packages.find]
where = src


@ -1,2 +1,18 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
from .pool import TaskPool, SimpleTaskPool
from .server import UnixControlServer


@ -1,46 +0,0 @@
import sys
from argparse import ArgumentParser
from asyncio import run
from pathlib import Path
from typing import Dict, Any
from .client import ControlClient, UnixControlClient
from .constants import PACKAGE_NAME
from .pool import TaskPool
from .server import ControlServer
CONN_TYPE = 'conn_type'
UNIX, TCP = 'unix', 'tcp'
SOCKET_PATH = 'path'
def parse_cli() -> Dict[str, Any]:
parser = ArgumentParser(
prog=PACKAGE_NAME,
description=f"CLI based {ControlClient.__name__} for {PACKAGE_NAME}"
)
subparsers = parser.add_subparsers(title="Connection types", dest=CONN_TYPE)
unix_parser = subparsers.add_parser(UNIX, help="Connect via unix socket")
unix_parser.add_argument(
SOCKET_PATH,
type=Path,
help=f"Path to the unix socket on which the {ControlServer.__name__} for the {TaskPool.__name__} is listening."
)
return vars(parser.parse_args())
async def main():
kwargs = parse_cli()
if kwargs[CONN_TYPE] == UNIX:
client = UnixControlClient(path=kwargs[SOCKET_PATH])
elif kwargs[CONN_TYPE] == TCP:
# TODO: Implement the TCP client class
client = UnixControlClient(path=kwargs[SOCKET_PATH])
else:
print("Invalid connection type", file=sys.stderr)
sys.exit(2)
await client.start()
if __name__ == '__main__':
run(main())


@ -1,63 +0,0 @@
import sys
from abc import ABC, abstractmethod
from asyncio.streams import StreamReader, StreamWriter, open_unix_connection
from pathlib import Path
from asyncio_taskpool import constants
from asyncio_taskpool.types import ClientConnT
class ControlClient(ABC):
@abstractmethod
async def open_connection(self, **kwargs) -> ClientConnT:
raise NotImplementedError
def __init__(self, **conn_kwargs) -> None:
self._conn_kwargs = conn_kwargs
self._connected: bool = False
async def _interact(self, reader: StreamReader, writer: StreamWriter) -> None:
try:
msg = input("> ").strip().lower()
except EOFError:
msg = constants.CLIENT_EXIT
except KeyboardInterrupt:
print()
return
if msg == constants.CLIENT_EXIT:
writer.close()
self._connected = False
return
try:
writer.write(msg.encode())
await writer.drain()
except ConnectionError as e:
self._connected = False
print(e, file=sys.stderr)
return
print((await reader.read(constants.MSG_BYTES)).decode())
async def start(self):
reader, writer = await self.open_connection(**self._conn_kwargs)
if reader is None:
print("Failed to connect.", file=sys.stderr)
return
self._connected = True
print("Connected to", (await reader.read(constants.MSG_BYTES)).decode())
while self._connected:
await self._interact(reader, writer)
print("Disconnected from control server.")
class UnixControlClient(ControlClient):
def __init__(self, **conn_kwargs) -> None:
self._socket_path = Path(conn_kwargs.pop('path'))
super().__init__(**conn_kwargs)
async def open_connection(self, **kwargs) -> ClientConnT:
try:
return await open_unix_connection(self._socket_path, **kwargs)
except FileNotFoundError:
print("No socket at", self._socket_path, file=sys.stderr)
return None, None


@ -1,8 +0,0 @@
PACKAGE_NAME = 'asyncio_taskpool'
MSG_BYTES = 1024
CMD_START = 'start'
CMD_STOP = 'stop'
CMD_STOP_ALL = 'stop_all'
CMD_SIZE = 'size'
CMD_FUNC = 'func'
CLIENT_EXIT = 'exit'


@ -0,0 +1,2 @@
from .server import TCPControlServer, UnixControlServer
from .client import TCPControlClient, UnixControlClient


@ -0,0 +1,80 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
CLI entry point script for a :class:`ControlClient`.
"""
from argparse import ArgumentParser
from asyncio import run
from pathlib import Path
from typing import Any, Dict, Sequence
from ..internals.constants import PACKAGE_NAME
from ..pool import TaskPool
from .client import TCPControlClient, UnixControlClient
from .server import TCPControlServer, UnixControlServer
__all__ = []
CLIENT_CLASS = 'client_class'
UNIX, TCP = 'unix', 'tcp'
SOCKET_PATH = 'socket_path'
HOST, PORT = 'host', 'port'
def parse_cli(args: Sequence[str] = None) -> Dict[str, Any]:
parser = ArgumentParser(
prog=f'{PACKAGE_NAME}.control',
description=f"Simple CLI based control client for {PACKAGE_NAME}"
)
subparsers = parser.add_subparsers(title="Connection types")
tcp_parser = subparsers.add_parser(TCP, help="Connect via TCP socket")
tcp_parser.add_argument(
HOST,
help=f"IP address or url that the {TCPControlServer.__name__} for the {TaskPool.__name__} is listening on."
)
tcp_parser.add_argument(
PORT,
type=int,
help=f"Port that the {TCPControlServer.__name__} for the {TaskPool.__name__} is listening on."
)
tcp_parser.set_defaults(**{CLIENT_CLASS: TCPControlClient})
unix_parser = subparsers.add_parser(UNIX, help="Connect via unix socket")
unix_parser.add_argument(
SOCKET_PATH,
type=Path,
help=f"Path to the unix socket on which the {UnixControlServer.__name__} for the {TaskPool.__name__} is "
f"listening."
)
unix_parser.set_defaults(**{CLIENT_CLASS: UnixControlClient})
return vars(parser.parse_args(args))
async def main():
kwargs = parse_cli()
client_cls = kwargs.pop(CLIENT_CLASS)
await client_cls(**kwargs).start()
if __name__ == '__main__':
run(main())


@ -0,0 +1,209 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Classes of control clients for a simple interface to a task pool control server.
"""
import json
import shutil
import sys
from abc import ABC, abstractmethod
from asyncio.streams import StreamReader, StreamWriter, open_connection
from pathlib import Path
from typing import Optional, Union
from ..internals.constants import CLIENT_INFO, SESSION_MSG_BYTES
from ..internals.types import ClientConnT, PathT
__all__ = [
'ControlClient',
'TCPControlClient',
'UnixControlClient',
'CLIENT_EXIT'
]
CLIENT_EXIT = 'exit'
class ControlClient(ABC):
"""
Abstract base class for a simple implementation of a pool control client.
Since the server's control interface is simply expecting commands to be sent, any process able to connect to the
TCP or UNIX socket and issue the relevant commands (and optionally read the responses) will work just as well.
This is a minimal working implementation.
"""
@staticmethod
def _client_info() -> dict:
"""Returns a dictionary of client information relevant for the handshake with the server."""
return {CLIENT_INFO.TERMINAL_WIDTH: shutil.get_terminal_size().columns}
@abstractmethod
async def _open_connection(self, **kwargs) -> ClientConnT:
"""
Tries to connect to a socket using the provided arguments and return the associated reader-writer-pair.
This method will be invoked by the public `start()` method with the pre-defined internal `_conn_kwargs`
(unpacked) as keyword-arguments.
This method should return either a tuple of `asyncio.StreamReader` and `asyncio.StreamWriter` or a tuple of
`None` and `None`, if it failed to establish the defined connection.
"""
raise NotImplementedError
def __init__(self, **conn_kwargs) -> None:
"""Simply stores the keyword-arguments for opening the connection."""
self._conn_kwargs = conn_kwargs
self._connected: bool = False
async def _server_handshake(self, reader: StreamReader, writer: StreamWriter) -> None:
"""
Performs the first interaction with the server providing it with the necessary client information.
Upon completion, the server's info is printed.
Args:
reader: The `asyncio.StreamReader` returned by the `_open_connection()` method
writer: The `asyncio.StreamWriter` returned by the `_open_connection()` method
"""
self._connected = True
writer.write(json.dumps(self._client_info()).encode())
writer.write(b'\n')
await writer.drain()
print("Connected to", (await reader.read(SESSION_MSG_BYTES)).decode())
print("Type '-h' to get help and usage instructions for all available commands.\n")
def _get_command(self, writer: StreamWriter) -> Optional[str]:
"""
Prompts the user for input and either returns it (after cleaning it up) or `None` in special cases.
Args:
writer: The `asyncio.StreamWriter` returned by the `_open_connection()` method
Returns:
`None`, if either `Ctrl+C` was hit, an empty or whitespace-only string was entered, or the user wants the
client to disconnect; otherwise, returns the user's input, stripped of leading and trailing spaces and
converted to lowercase.
"""
try:
cmd = input("> ").strip().lower()
except EOFError: # Ctrl+D shall be equivalent to the :const:`CLIENT_EXIT` command.
cmd = CLIENT_EXIT
except KeyboardInterrupt: # Ctrl+C shall simply reset to the input prompt.
print()
return
if cmd == CLIENT_EXIT:
writer.close()
self._connected = False
return
return cmd or None # will be None if `cmd` is an empty string
async def _interact(self, reader: StreamReader, writer: StreamWriter) -> None:
"""
Reacts to the user's command, potentially performing a back-and-forth interaction with the server.
If `_get_command` returns `None`, this may imply that the client disconnected, but may also just be `Ctrl+C`.
If an actual command is retrieved, it is written to the stream, a response is awaited and eventually printed.
Args:
reader: The `asyncio.StreamReader` returned by the `_open_connection()` method
writer: The `asyncio.StreamWriter` returned by the `_open_connection()` method
"""
cmd = self._get_command(writer)
if cmd is None:
return
try:
# Send the command to the server.
writer.write(cmd.encode())
writer.write(b'\n')
await writer.drain()
except ConnectionError as e:
self._connected = False
print(e, file=sys.stderr)
return
# Await the server's response, then print it.
print((await reader.read(SESSION_MSG_BYTES)).decode())
async def start(self) -> None:
"""
Opens connection, performs handshake, and enters interaction loop.
An input prompt is presented to the user and any input is sent (encoded) to the connected server.
One exception is the :const:`CLIENT_EXIT` command (equivalent to Ctrl+D), which merely closes the connection.
If the connection can not be established, an error message is printed to `stderr` and the method returns.
If either the exit command is issued or the connection to the server is lost during the interaction loop,
the method returns and prints out a disconnected-message.
"""
reader, writer = await self._open_connection(**self._conn_kwargs)
if reader is None:
print("Failed to connect.", file=sys.stderr)
return
await self._server_handshake(reader, writer)
while self._connected:
await self._interact(reader, writer)
print("Disconnected from control server.")
class TCPControlClient(ControlClient):
"""Task pool control client for connecting to a :class:`TCPControlServer`."""
def __init__(self, host: str, port: Union[int, str], **conn_kwargs) -> None:
"""`host` and `port` are expected as non-optional connection arguments."""
self._host = host
self._port = port
super().__init__(**conn_kwargs)
async def _open_connection(self, **kwargs) -> ClientConnT:
"""
Wrapper around the `asyncio.open_connection` function.
Returns a tuple of `None` and `None`, if the connection can not be established;
otherwise, the stream-reader and -writer tuple is returned.
"""
try:
return await open_connection(self._host, self._port, **kwargs)
except ConnectionError as e:
print(str(e), file=sys.stderr)
return None, None
class UnixControlClient(ControlClient):
"""Task pool control client for connecting to a :class:`UnixControlServer`."""
def __init__(self, socket_path: PathT, **conn_kwargs) -> None:
"""`socket_path` is expected as a non-optional connection argument."""
from asyncio.streams import open_unix_connection
self._open_unix_connection = open_unix_connection
self._socket_path = Path(socket_path)
super().__init__(**conn_kwargs)
async def _open_connection(self, **kwargs) -> ClientConnT:
"""
Wrapper around the `asyncio.open_unix_connection` function.
Returns a tuple of `None` and `None`, if the socket is not found at the pre-defined path;
otherwise, the stream-reader and -writer tuple is returned.
"""
try:
return await self._open_unix_connection(self._socket_path, **kwargs)
except FileNotFoundError:
print("No socket at", self._socket_path, file=sys.stderr)
return None, None


@ -0,0 +1,342 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Definition of the :class:`ControlParser` used in a
:class:`ControlSession <asyncio_taskpool.control.session.ControlSession>`.
It should not be considered part of the public API.
"""
import logging
from argparse import Action, ArgumentParser, ArgumentDefaultsHelpFormatter, HelpFormatter, ArgumentTypeError, SUPPRESS
from ast import literal_eval
from inspect import Parameter, getmembers, isfunction, signature
from io import StringIO
from shutil import get_terminal_size
from typing import Any, Callable, Container, Dict, Iterable, Set, Type, TypeVar
from ..exceptions import HelpRequested, ParserError
from ..internals.constants import CLIENT_INFO, CMD
from ..internals.helpers import get_first_doc_line, resolve_dotted_path
from ..internals.types import ArgsT, CancelCB, CoroutineFunc, EndCB, KwArgsT
__all__ = ['ControlParser']
log = logging.getLogger(__name__)
FmtCls = TypeVar('FmtCls', bound=Type[HelpFormatter])
ParsersDict = Dict[str, 'ControlParser']
OMIT_PARAMS_DEFAULT = ('self', )
NAME, PROG, HELP, DESCRIPTION = 'name', 'prog', 'help', 'description'
class ControlParser(ArgumentParser):
"""
Subclass of the standard :code:`argparse.ArgumentParser` for pool control.
Such a parser is not supposed to ever print to stdout/stderr, but instead direct all messages to a file-like
`StringIO` instance passed to it during initialization.
Furthermore, it requires defining the width of the terminal, to adjust help formatting to the terminal size of a
connected client.
Finally, it offers some convenience methods and makes use of custom exceptions.
"""
@staticmethod
def help_formatter_factory(terminal_width: int, base_cls: FmtCls = None) -> FmtCls:
"""
Constructs and returns a subclass of :class:`argparse.HelpFormatter`
The formatter class will have the defined `terminal_width`.
Although a custom formatter class can be explicitly passed into the :class:`ArgumentParser` constructor,
this is not as convenient, when making use of sub-parsers.
Args:
terminal_width:
The number of columns of the terminal to which to adjust help formatting.
base_cls (optional):
Base class to use for inheritance. By default :class:`argparse.ArgumentDefaultsHelpFormatter` is used.
Returns:
The subclass of `base_cls` which fixes the constructor's `width` keyword-argument to `terminal_width`.
"""
if base_cls is None:
base_cls = ArgumentDefaultsHelpFormatter
class ClientHelpFormatter(base_cls):
def __init__(self, *args, **kwargs) -> None:
kwargs['width'] = terminal_width
super().__init__(*args, **kwargs)
return ClientHelpFormatter
def __init__(self, stream: StringIO, terminal_width: int = None, **kwargs) -> None:
"""
Sets some internal attributes in addition to the base class.
Args:
stream:
A file-like I/O object to use for message output.
terminal_width (optional):
The terminal width to use for all message formatting. By default the :code:`columns` attribute from
:func:`shutil.get_terminal_size` is taken.
**kwargs(optional):
Passed to the parent class constructor. The exception is the `formatter_class` parameter: Even if a
class is specified, it will always be subclassed in the :meth:`help_formatter_factory`.
"""
self._stream: StringIO = stream
self._terminal_width: int = terminal_width if terminal_width is not None else get_terminal_size().columns
kwargs['formatter_class'] = self.help_formatter_factory(self._terminal_width, kwargs.get('formatter_class'))
super().__init__(**kwargs)
self._flags: Set[str] = set()
self._commands = None
def add_function_command(self, function: Callable, omit_params: Container[str] = OMIT_PARAMS_DEFAULT,
**subparser_kwargs) -> 'ControlParser':
"""
Takes a function and adds a corresponding (sub-)command to the parser.
The :meth:`add_subparsers` method must have been called prior to this.
NOTE: Currently, only a limited spectrum of parameters can be accurately converted to parser arguments.
This method works correctly with any public method of any task pool class.
Args:
function:
The reference to the function to be "converted" to a parser command.
omit_params (optional):
Names of function parameters not to add as parser arguments.
**subparser_kwargs (optional):
Passed directly to the :meth:`add_parser` method.
Returns:
The subparser instance created from the function.
"""
subparser_kwargs.setdefault(NAME, function.__name__.replace('_', '-'))
subparser_kwargs.setdefault(PROG, subparser_kwargs[NAME])
subparser_kwargs.setdefault(HELP, get_first_doc_line(function))
subparser_kwargs.setdefault(DESCRIPTION, subparser_kwargs[HELP])
subparser: ControlParser = self._commands.add_parser(**subparser_kwargs)
subparser.add_function_args(function, omit_params)
return subparser
def add_property_command(self, prop: property, cls_name: str = '', **subparser_kwargs) -> 'ControlParser':
"""
Same as the :meth:`add_function_command` method, but for properties.
Args:
prop:
The reference to the property to be "converted" to a parser command.
cls_name (optional):
Name of the class the property is defined on to appear in the command help text.
**subparser_kwargs (optional):
Passed directly to the :meth:`add_parser` method.
Returns:
The subparser instance created from the property.
"""
subparser_kwargs.setdefault(NAME, prop.fget.__name__.replace('_', '-'))
subparser_kwargs.setdefault(PROG, subparser_kwargs[NAME])
getter_help = get_first_doc_line(prop.fget)
if prop.fset is None:
subparser_kwargs.setdefault(HELP, getter_help)
else:
subparser_kwargs.setdefault(HELP, f"Get/set the `{cls_name}.{subparser_kwargs[NAME]}` property")
subparser_kwargs.setdefault(DESCRIPTION, subparser_kwargs[HELP])
subparser: ControlParser = self._commands.add_parser(**subparser_kwargs)
if prop.fset is not None:
_, param = signature(prop.fset).parameters.values()
setter_arg_help = f"If provided: {get_first_doc_line(prop.fset)} If omitted: {getter_help}"
subparser.add_function_arg(param, nargs='?', default=SUPPRESS, help=setter_arg_help)
return subparser
def add_class_commands(self, cls: Type, public_only: bool = True, omit_members: Container[str] = (),
member_arg_name: str = CMD) -> ParsersDict:
"""
Adds methods/properties of a class as (sub-)commands to the parser.
The :meth:`add_subparsers` method must have been called prior to this.
NOTE: Currently, only a limited spectrum of function parameters can be accurately converted to parser arguments.
This method works correctly with any task pool class.
Args:
cls:
The reference to the class whose methods/properties are to be "converted" to parser commands.
public_only (optional):
If `False`, protected and private members are considered as well. `True` by default.
omit_members (optional):
Names of functions/properties not to add as parser commands.
member_arg_name (optional):
After parsing the arguments, depending on which command was invoked by the user, the corresponding
method/property will be stored as an extra argument in the parsed namespace under this attribute name.
Returns:
Dictionary mapping class member names to the (sub-)parsers created from them.
"""
parsers: ParsersDict = {}
common_kwargs = {'stream': self._stream, CLIENT_INFO.TERMINAL_WIDTH: self._terminal_width}
for name, member in getmembers(cls):
if name in omit_members or (name.startswith('_') and public_only):
continue
if isfunction(member):
subparser = self.add_function_command(member, **common_kwargs)
elif isinstance(member, property):
subparser = self.add_property_command(member, cls.__name__, **common_kwargs)
else:
continue
subparser.set_defaults(**{member_arg_name: member})
parsers[name] = subparser
return parsers
def add_subparsers(self, *args, **kwargs):
"""Adds the subparsers action as an attribute before returning it."""
self._commands = super().add_subparsers(*args, **kwargs)
return self._commands
def _print_message(self, message: str, *args, **kwargs) -> None:
"""This is overridden to ensure that no messages are sent to stdout/stderr, but always to the stream buffer."""
if message:
self._stream.write(message)
def exit(self, status: int = 0, message: str = None) -> None:
"""This is overridden to prevent system exit to be invoked."""
if message:
self._print_message(message)
def error(self, message: str) -> None:
"""Raises the :exc:`ParserError <asyncio_taskpool.exceptions.ParserError>` exception at the end."""
super().error(message=message)
raise ParserError
def print_help(self, file=None) -> None:
"""Raises the :exc:`HelpRequested <asyncio_taskpool.exceptions.HelpRequested>` exception at the end."""
super().print_help(file)
raise HelpRequested
def add_function_arg(self, parameter: Parameter, **kwargs) -> Action:
"""
Takes an :class:`inspect.Parameter` and adds a corresponding parser argument.
NOTE: Currently, only a limited spectrum of parameters can be accurately converted to a parser argument.
This method works correctly with any parameter of any public method of any task pool class.
Args:
parameter: The :class:`inspect.Parameter` object to be converted to a parser argument.
**kwargs: Passed to the :meth:`add_argument` method of the base class.
Returns:
The :class:`argparse.Action` returned by the :meth:`add_argument` method.
"""
if parameter.default is Parameter.empty:
# A non-optional function parameter should correspond to a positional argument.
name_or_flags = [parameter.name]
else:
flag = None
long = f'--{parameter.name.replace("_", "-")}'
# We try to generate a short version (flag) for the argument.
letter = parameter.name[0]
if letter not in self._flags:
flag = f'-{letter}'
self._flags.add(letter)
elif letter.upper() not in self._flags:
flag = f'-{letter.upper()}'
self._flags.add(letter.upper())
name_or_flags = [long] if flag is None else [flag, long]
if parameter.annotation is bool:
# If we are dealing with a boolean parameter, always use the 'store_true' action.
# Even if the parameter's default value is `True`, this will make the parser argument's default `False`.
kwargs.setdefault('action', 'store_true')
else:
# For now, any other type annotation will implicitly use the default action 'store'.
# In addition, we always set the default value.
kwargs.setdefault('default', parameter.default)
if parameter.kind == Parameter.VAR_POSITIONAL:
# This is to be able to later unpack an arbitrary number of positional arguments.
kwargs.setdefault('nargs', '*')
if not kwargs.get('action') == 'store_true':
# Set the type from the parameter annotation.
kwargs.setdefault('type', _get_type_from_annotation(parameter.annotation))
return self.add_argument(*name_or_flags, **kwargs)
def add_function_args(self, function: Callable, omit: Container[str] = OMIT_PARAMS_DEFAULT) -> None:
"""
Takes a function and adds its parameters as arguments to the parser.
NOTE: Currently, only a limited spectrum of parameters can be accurately converted to a parser argument.
This method works correctly with any public method of any task pool class.
Args:
function:
The function whose parameters are to be converted to parser arguments.
Its parameters must be properly annotated.
omit (optional):
Names of function parameters not to add as parser arguments.
"""
for param in signature(function).parameters.values():
if param.name not in omit:
# TODO: Look into parsing docstrings properly to try and extract argument help text.
# For now, the argument help just shows the type it will be converted to.
self.add_function_arg(param, help=repr(param.annotation))
def _get_arg_type_wrapper(cls: Type) -> Callable[[Any], Any]:
"""
Returns a wrapper for the constructor of `cls` to avoid a ValueError being raised on suppressed arguments.
See: https://bugs.python.org/issue36078
In addition, the type conversion wrapper catches exceptions not handled properly by the parser, logs them, and
turns them into `ArgumentTypeError` exceptions the parser can propagate to the client.
"""
def wrapper(arg: Any) -> Any:
if arg is SUPPRESS:
return arg
try:
return cls(arg)
except (ArgumentTypeError, TypeError, ValueError):
raise # handled properly by the parser and propagated to the client anyway
except Exception as e:
text = f"{e.__class__.__name__} occurred in parser trying to convert type: {cls.__name__}({repr(arg)})"
log.exception(text)
raise ArgumentTypeError(text) # propagate to the client
# Copy the name of the class to maintain useful help messages when incorrect arguments are passed.
wrapper.__name__ = cls.__name__
return wrapper
def _get_type_from_annotation(annotation: Type) -> Callable[[Any], Any]:
"""
Returns a type conversion function based on the `annotation` passed.
Required to properly convert parsed arguments to the type expected by certain pool methods.
Each conversion function is wrapped by `_get_arg_type_wrapper`.
`Callable`-type annotations give the `resolve_dotted_path` function.
`Iterable`- or args/kwargs-type annotations give the `ast.literal_eval` function.
All other annotations are passed through unchanged (but still wrapped by `_get_arg_type_wrapper`).
"""
if any(annotation is t for t in {CoroutineFunc, EndCB, CancelCB}):
annotation = resolve_dotted_path
if any(annotation is t for t in {ArgsT, KwArgsT, Iterable[ArgsT], Iterable[KwArgsT]}):
annotation = literal_eval
return _get_arg_type_wrapper(annotation)
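
For orientation, here is a rough usage sketch of the parser defined above, assembled only from the constructor arguments and methods visible in this file (it is not taken from the package documentation): a ControlParser writing to a string buffer, populated with the commands of a pool class, and asked for its help text.

    from io import StringIO

    from asyncio_taskpool.control.parser import ControlParser
    from asyncio_taskpool.exceptions import HelpRequested
    from asyncio_taskpool.pool import SimpleTaskPool

    buffer = StringIO()
    parser = ControlParser(stream=buffer, terminal_width=80, prog='', usage='[-h] [command] ...')
    parser.add_subparsers(title="Commands", metavar="(commands)")
    parser.add_class_commands(SimpleTaskPool)  # every public method/property becomes a subcommand
    try:
        parser.parse_args(['-h'])  # help text is written into `buffer`, then HelpRequested is raised
    except HelpRequested:
        print(buffer.getvalue())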


@ -0,0 +1,182 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Task pool control server class definitions.
"""
import logging
from abc import ABC, abstractmethod
from asyncio import AbstractServer
from asyncio.exceptions import CancelledError
from asyncio.streams import StreamReader, StreamWriter, start_server
from asyncio.tasks import Task, create_task
from pathlib import Path
from typing import Optional, Union
from .client import ControlClient, TCPControlClient, UnixControlClient
from .session import ControlSession
from ..pool import AnyTaskPoolT
from ..internals.helpers import classmethod
from ..internals.types import ConnectedCallbackT, PathT
__all__ = ['ControlServer', 'TCPControlServer', 'UnixControlServer']
log = logging.getLogger(__name__)
class ControlServer(ABC):
"""
Abstract base class for a task pool control server.
This class acts as a wrapper around an async server instance and initializes a
:class:`ControlSession <asyncio_taskpool.control.session.ControlSession>` once a client connects to it.
The interface is defined within the session class.
"""
_client_class = ControlClient
@classmethod
@property
def client_class_name(cls) -> str:
"""Returns the name of the matching control client class."""
return cls._client_class.__name__
def __init__(self, pool: AnyTaskPoolT, **server_kwargs) -> None:
"""
Merely sets internal attributes, but does not start the server yet.
The task pool must be passed here and cannot be set/changed afterwards. This means a control server is always
tied to one specific task pool.
Args:
pool:
An instance of a `BaseTaskPool` subclass to tie the server to.
**server_kwargs (optional):
Keyword arguments that will be passed into the function that starts the server.
"""
self._pool: AnyTaskPoolT = pool
self._server_kwargs = server_kwargs
self._server: Optional[AbstractServer] = None
@property
def pool(self) -> AnyTaskPoolT:
"""The task pool instance controlled by the server."""
return self._pool
def is_serving(self) -> bool:
"""Wrapper around the `asyncio.Server.is_serving` method."""
return self._server.is_serving()
async def _client_connected_cb(self, reader: StreamReader, writer: StreamWriter) -> None:
"""
The universal client callback that will be passed into the `_get_server_instance` method.
Instantiates a control session, performs the client handshake, and enters the session's `listen` loop.
"""
session = ControlSession(self, reader, writer)
await session.client_handshake()
await session.listen()
@abstractmethod
async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer:
"""
Initializes, starts, and returns an async server instance (Unix or TCP type).
Args:
client_connected_cb:
The callback for when a client connects to the server (as per `asyncio.start_server` or
`asyncio.start_unix_server`); will always be the internal `_client_connected_cb` method.
**kwargs (optional):
Keyword arguments to pass into the function that starts the server.
Returns:
The running server object (a type of `asyncio.Server`).
"""
raise NotImplementedError
@abstractmethod
def _final_callback(self) -> None:
"""The method to run after the server's `serve_forever` methods ends for whatever reason."""
raise NotImplementedError
async def _serve_forever(self) -> None:
"""
To be run as an `asyncio.Task` by the following method.
Serves as a wrapper around the `asyncio.Server.serve_forever` method that ensures that the `_final_callback`
method is called when the former ends for whatever reason.
"""
try:
async with self._server:
await self._server.serve_forever()
except CancelledError:
log.debug("%s stopped", self.__class__.__name__)
finally:
self._final_callback()
async def serve_forever(self) -> Task:
"""
Starts the server and begins listening to client connections.
It should never block because the serving will be performed in a separate task.
Returns:
The forever serving task. To stop the server, this task should be cancelled.
"""
log.debug("Starting %s...", self.__class__.__name__)
self._server = await self._get_server_instance(self._client_connected_cb, **self._server_kwargs)
return create_task(self._serve_forever())
class TCPControlServer(ControlServer):
"""Exposes a TCP socket for control clients to connect to."""
_client_class = TCPControlClient
def __init__(self, pool: AnyTaskPoolT, host: str, port: Union[int, str], **server_kwargs) -> None:
"""`host` and `port` are expected as non-optional server arguments."""
self._host = host
self._port = port
super().__init__(pool, **server_kwargs)
async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer:
server = await start_server(client_connected_cb, self._host, self._port, **kwargs)
log.debug("Opened socket at %s:%s", self._host, self._port)
return server
def _final_callback(self) -> None:
log.debug("Closed socket at %s:%s", self._host, self._port)
class UnixControlServer(ControlServer):
"""Exposes a unix socket for control clients to connect to."""
_client_class = UnixControlClient
def __init__(self, pool: AnyTaskPoolT, socket_path: PathT, **server_kwargs) -> None:
"""`socket_path` is expected as a non-optional server argument."""
from asyncio.streams import start_unix_server
self._start_unix_server = start_unix_server
self._socket_path = Path(socket_path)
super().__init__(pool, **server_kwargs)
async def _get_server_instance(self, client_connected_cb: ConnectedCallbackT, **kwargs) -> AbstractServer:
server = await self._start_unix_server(client_connected_cb, self._socket_path, **kwargs)
log.debug("Opened socket '%s'", str(self._socket_path))
return server
def _final_callback(self) -> None:
"""Removes the unix socket on which the server was listening."""
self._socket_path.unlink()
log.debug("Removed socket '%s'", str(self._socket_path))


@ -0,0 +1,200 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Definition of the :class:`ControlSession` used by a :class:`ControlServer`.
It should not be considered part of the public API.
"""
import logging
import json
from argparse import ArgumentError
from asyncio.streams import StreamReader, StreamWriter
from inspect import isfunction, signature
from io import StringIO
from typing import Callable, Optional, Union, TYPE_CHECKING
from .parser import ControlParser
from ..exceptions import CommandError, HelpRequested, ParserError
from ..pool import TaskPool, SimpleTaskPool
from ..internals.constants import CLIENT_INFO, CMD, CMD_OK
from ..internals.helpers import return_or_exception
if TYPE_CHECKING:
from .server import ControlServer
__all__ = ['ControlSession']
log = logging.getLogger(__name__)
class ControlSession:
"""
Manages a single control session between a server and a client.
The commands received from a connected client are translated into method calls on the task pool instance.
A subclass of the standard :class:`argparse.ArgumentParser` is used to handle the input read from the stream.
"""
def __init__(self, server: 'ControlServer', reader: StreamReader, writer: StreamWriter) -> None:
"""
Connection to the control server should already have been established.
For more convenient/efficient access, some of the server's properties are saved in separate attributes.
The argument parser is _not_ instantiated in the constructor. It requires a bit of client information during
initialization, which is obtained in the `client_handshake` method; only there is the parser fully configured.
Args:
server:
The instance of a :class:`ControlServer` subclass starting the session.
reader:
The `asyncio.StreamReader` created when a client connected to the server.
writer:
The `asyncio.StreamWriter` created when a client connected to the server.
"""
self._control_server: 'ControlServer' = server
self._pool: Union[TaskPool, SimpleTaskPool] = server.pool
self._client_class_name = server.client_class_name
self._reader: StreamReader = reader
self._writer: StreamWriter = writer
self._parser: Optional[ControlParser] = None
self._response_buffer: StringIO = StringIO()
async def _exec_method_and_respond(self, method: Callable, **kwargs) -> None:
"""
Takes a pool method reference, executes it, and writes a response accordingly.
If the first parameter is named `self`, the method will be called with the `_pool` instance as its first
positional argument.
If it returns nothing, the response upon successful execution will be :const:`constants.CMD_OK`, otherwise the
response written to the stream will be its return value (as an encoded string).
Args:
method:
The reference to the method defined on the `_pool` instance's class.
**kwargs (optional):
Must correspond to the arguments expected by the `method`.
Correctly unpacks arbitrary-length positional and keyword-arguments.
"""
log.debug("%s calls %s.%s", self._client_class_name, self._pool.__class__.__name__, method.__name__)
normal_pos, var_pos = [], []
for param in signature(method).parameters.values():
if param.name == 'self':
normal_pos.append(self._pool)
elif param.kind in (param.POSITIONAL_OR_KEYWORD, param.POSITIONAL_ONLY):
normal_pos.append(kwargs.pop(param.name))
elif param.kind == param.VAR_POSITIONAL:
var_pos = kwargs.pop(param.name)
output = await return_or_exception(method, *normal_pos, *var_pos, **kwargs)
self._response_buffer.write(CMD_OK.decode() if output is None else str(output))
async def _exec_property_and_respond(self, prop: property, **kwargs) -> None:
"""
Takes a pool property reference, executes its setter or getter, and writes a response accordingly.
The property set/get method will always be called with the `_pool` instance as its first positional argument.
Args:
prop:
The reference to the property defined on the `_pool` instance's class.
**kwargs (optional):
If not empty, the property setter is executed and the keyword arguments are passed along to it; the
response upon successful execution will be :const:`constants.CMD_OK`. Otherwise the property getter is
executed and the response written to the stream will be its return value (as an encoded string).
"""
if kwargs:
log.debug("%s sets %s.%s", self._client_class_name, self._pool.__class__.__name__, prop.fset.__name__)
await return_or_exception(prop.fset, self._pool, **kwargs)
self._response_buffer.write(CMD_OK.decode())
else:
log.debug("%s gets %s.%s", self._client_class_name, self._pool.__class__.__name__, prop.fget.__name__)
self._response_buffer.write(str(await return_or_exception(prop.fget, self._pool)))
async def client_handshake(self) -> None:
"""
Must be invoked before starting any other client interaction.
Client info is retrieved, server info is sent back, and the
:class:`ControlParser <asyncio_taskpool.control.parser.ControlParser>` is set up.
"""
msg = (await self._reader.readline()).decode().strip()
client_info = json.loads(msg)
log.debug("%s connected", self._client_class_name)
parser_kwargs = {
'stream': self._response_buffer,
CLIENT_INFO.TERMINAL_WIDTH: client_info[CLIENT_INFO.TERMINAL_WIDTH],
'prog': '',
'usage': f'[-h] [{CMD}] ...'
}
self._parser = ControlParser(**parser_kwargs)
self._parser.add_subparsers(title="Commands",
metavar="(A command followed by '-h' or '--help' will show command-specific help.)")
self._parser.add_class_commands(self._pool.__class__)
self._writer.write(str(self._pool).encode() + b'\n')
await self._writer.drain()
async def _parse_command(self, msg: str) -> None:
"""
Takes a message from the client and attempts to parse it.
If a parsing error occurs, it is returned to the client. If the :exc:`HelpRequested` exception was raised by the
:class:`ControlParser`, nothing else happens. Otherwise, the appropriate `_exec...` method is called with the
entire dictionary of keyword-arguments returned by the :class:`ControlParser` passed into it.
Args:
msg: The non-empty string read from the client stream.
"""
try:
kwargs = vars(self._parser.parse_args(msg.split(' ')))
except ArgumentError as e:
log.debug("%s got an ArgumentError", self._client_class_name)
self._response_buffer.write(str(e))
return
except (HelpRequested, ParserError):
log.debug("%s received usage help", self._client_class_name)
return
command = kwargs.pop(CMD)
if isfunction(command):
await self._exec_method_and_respond(command, **kwargs)
elif isinstance(command, property):
await self._exec_property_and_respond(command, **kwargs)
else:
self._response_buffer.write(str(CommandError(f"Unknown command object: {command}")))
async def listen(self) -> None:
"""
Enters the main control loop listening to client input.
This method only returns if either the server or the client disconnects.
Messages from the client are read, parsed, and turned into pool commands (if possible).
This method should be called once the client connection has been established and the handshake was successful.
It will obviously block indefinitely.
"""
while self._control_server.is_serving():
msg = (await self._reader.readline()).decode().strip()
if not msg:
log.debug("%s disconnected", self._client_class_name)
break
await self._parse_command(msg)
response = self._response_buffer.getvalue() + "\n"
self._response_buffer.seek(0)
self._response_buffer.truncate()
self._writer.write(response.encode())
await self._writer.drain()
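
The handshake and `listen` loop above define a very simple line-based protocol: the client sends one JSON line containing its terminal width, the server answers with `str(pool)` plus a newline, and every subsequent command line is answered with the (newline-terminated) contents of the response buffer. A bare-bones client speaking this protocol directly, without the provided `ControlClient` classes, could look roughly like this (the socket path and the command are placeholders):

    import asyncio
    import json

    async def raw_control_client(socket_path: str = '/tmp/taskpool.sock') -> None:
        reader, writer = await asyncio.open_unix_connection(socket_path)
        writer.write(json.dumps({'terminal_width': 80}).encode() + b'\n')  # client handshake line
        await writer.drain()
        print((await reader.readline()).decode().strip())  # server replies with str(pool)
        writer.write(b'-h\n')  # any command line understood by the ControlParser
        await writer.drain()
        print((await reader.read(1024 * 100)).decode())  # responses may span multiple lines
        writer.close()
        await writer.wait_closed()

    # asyncio.run(raw_control_client())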


@ -1,3 +1,24 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Custom exception classes used in various modules.
"""
class PoolException(Exception):
pass
@ -6,6 +27,10 @@ class PoolIsClosed(PoolException):
pass
class PoolIsLocked(PoolException):
pass
class TaskEnded(PoolException):
pass
@ -22,9 +47,25 @@ class InvalidTaskID(PoolException):
pass
class InvalidGroupName(PoolException):  # previously named PoolStillOpen
pass
class NotCoroutine(PoolException):
pass
class ServerException(Exception):
pass
class HelpRequested(ServerException):
pass
class ParserError(ServerException):
pass
class CommandError(ServerException):
pass


@ -1,29 +0,0 @@
from asyncio.coroutines import iscoroutinefunction
from asyncio.queues import Queue
from typing import Any, Optional
from .types import T, AnyCallableT, ArgsT, KwArgsT
async def execute_optional(function: AnyCallableT, args: ArgsT = (), kwargs: KwArgsT = None) -> Optional[T]:
if not callable(function):
return
if kwargs is None:
kwargs = {}
if iscoroutinefunction(function):
return await function(*args, **kwargs)
return function(*args, **kwargs)
def star_function(function: AnyCallableT, arg: Any, arg_stars: int = 0) -> T:
if arg_stars == 0:
return function(arg)
if arg_stars == 1:
return function(*arg)
if arg_stars == 2:
return function(**arg)
raise ValueError(f"Invalid argument arg_stars={arg_stars}; must be 0, 1, or 2.")
async def join_queue(q: Queue) -> None:
await q.join()


@ -0,0 +1,41 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Constants used by more than one module in the package.
This module should **not** be considered part of the public API.
"""
import sys
PACKAGE_NAME = 'asyncio_taskpool'
PYTHON_BEFORE_39 = sys.version_info[:2] < (3, 9)
DEFAULT_TASK_GROUP = 'default'
SESSION_MSG_BYTES = 1024 * 100
CMD = 'command'
CMD_OK = b"ok"
class CLIENT_INFO:
__slots__ = ()
TERMINAL_WIDTH = 'terminal_width'


@ -0,0 +1,77 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Definition of :class:`TaskGroupRegister`.
It should not be considered part of the public API.
"""
from asyncio.locks import Lock
from collections.abc import MutableSet
from typing import Iterator, Set
class TaskGroupRegister(MutableSet):
"""
Combines the interface of a regular `set` with that of the `asyncio.Lock`.
Serves simultaneously as a container of IDs of tasks that belong to the same group, and as a mechanism for
preventing race conditions within a task group. The lock should be acquired before cancelling the entire group of
tasks, as well as before starting a task within the group.
"""
def __init__(self, *task_ids: int) -> None:
self._ids: Set[int] = set(task_ids)
self._lock = Lock()
def __contains__(self, task_id: int) -> bool:
"""Abstract method for the `MutableSet` base class."""
return task_id in self._ids
def __iter__(self) -> Iterator[int]:
"""Abstract method for the `MutableSet` base class."""
return iter(self._ids)
def __len__(self) -> int:
"""Abstract method for the `MutableSet` base class."""
return len(self._ids)
def add(self, task_id: int) -> None:
"""Abstract method for the `MutableSet` base class."""
self._ids.add(task_id)
def discard(self, task_id: int) -> None:
"""Abstract method for the `MutableSet` base class."""
self._ids.discard(task_id)
async def acquire(self) -> bool:
"""Wrapper around the lock's `acquire()` method."""
return await self._lock.acquire()
def release(self) -> None:
"""Wrapper around the lock's `release()` method."""
self._lock.release()
async def __aenter__(self) -> None:
"""Provides the asynchronous context manager syntax `async with ... :` when using the lock."""
await self._lock.acquire()
return None
async def __aexit__(self, exc_type, exc, tb) -> None:
"""Provides the asynchronous context manager syntax `async with ... :` when using the lock."""
self._lock.release()
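
A short sketch of the usage pattern the docstring above describes, i.e. holding the group's lock while modifying membership or acting on the whole group:

    async def add_to_group(register: TaskGroupRegister, task_id: int) -> None:
        async with register:       # acquires the internal asyncio.Lock
            register.add(task_id)  # membership changes happen while the lock is held

    async def collect_group_ids(register: TaskGroupRegister) -> list:
        async with register:
            return list(register)  # a stable snapshot of the IDs currently in the group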


@ -0,0 +1,157 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Miscellaneous helper functions. None of these should be considered part of the public API.
"""
import builtins
from asyncio.coroutines import iscoroutinefunction
from importlib import import_module
from inspect import getdoc
from typing import Any, Callable, Optional, Type, Union
from .constants import PYTHON_BEFORE_39
from .types import T, AnyCallableT, ArgsT, KwArgsT
async def execute_optional(function: AnyCallableT, args: ArgsT = (), kwargs: KwArgsT = None) -> Optional[T]:
"""
Runs `function` with `args` and `kwargs` and returns its output.
Args:
function:
Any callable that accepts the provided positional and keyword-arguments.
If it is a coroutine function, it will be awaited.
If it is not a callable, nothing is returned.
args (optional):
Positional arguments to pass to `function`.
kwargs (optional):
Keyword-arguments to pass to `function`.
Returns:
Whatever `function` returns (possibly after being awaited) or `None` if `function` is not callable.
"""
if not callable(function):
return
if kwargs is None:
kwargs = {}
if iscoroutinefunction(function):
return await function(*args, **kwargs)
return function(*args, **kwargs)
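# Example (not part of this module): the three paths through execute_optional.
#   await execute_optional(len, args=([1, 2, 3],))    -> 3     (regular callable)
#   await execute_optional(asyncio.sleep, args=(0,))  -> None  (coroutine function, awaited)
#   await execute_optional(None)                      -> None  (not callable, nothing happens)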
def star_function(function: AnyCallableT, arg: Any, arg_stars: int = 0) -> T:
"""
Calls `function` passing `arg` to it, optionally unpacking it first.
Args:
function:
Any callable that accepts the provided argument(s).
arg:
The single positional argument that `function` expects; in this case `arg_stars` should be 0.
Or the iterable of positional arguments that `function` expects; in this case `arg_stars` should be 1.
Or the mapping of keyword-arguments that `function` expects; in this case `arg_stars` should be 2.
arg_stars (optional):
Determines if and how to unpack `arg`.
0 means no unpacking, i.e. `arg` is passed into `function` directly as `function(arg)`.
1 means unpacking to an arbitrary number of positional arguments, i.e. as `function(*arg)`.
2 means unpacking to an arbitrary number of keyword-arguments, i.e. as `function(**arg)`.
Returns:
Whatever `function` returns.
Raises:
`ValueError`: `arg_stars` is something other than 0, 1, or 2.
"""
if arg_stars == 0:
return function(arg)
if arg_stars == 1:
return function(*arg)
if arg_stars == 2:
return function(**arg)
raise ValueError(f"Invalid argument arg_stars={arg_stars}; must be 0, 1, or 2.")
def get_first_doc_line(obj: object) -> str:
"""Takes an object and returns the first (non-empty) line of its docstring."""
return getdoc(obj).strip().split("\n", 1)[0].strip()
async def return_or_exception(_function_to_execute: AnyCallableT, *args, **kwargs) -> Union[T, Exception]:
"""
Returns the output of a function or the exception thrown during its execution.
Args:
_function_to_execute:
Any callable that accepts the provided positional and keyword-arguments.
*args (optional):
Positional arguments to pass to `_function_to_execute`.
**kwargs (optional):
Keyword-arguments to pass to `_function_to_execute`.
Returns:
Whatever `_function_to_execute` returns or throws. (An exception is not raised, but returned!)
"""
try:
if iscoroutinefunction(_function_to_execute):
return await _function_to_execute(*args, **kwargs)
else:
return _function_to_execute(*args, **kwargs)
except Exception as e:
return e
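# Example (not part of this module): exceptions are returned instead of raised.
#   await return_or_exception(divmod, 1, 2)  -> (0, 1)
#   await return_or_exception(divmod, 1, 0)  -> a ZeroDivisionError instance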
def resolve_dotted_path(dotted_path: str) -> object:
"""
Resolves a dotted path to a global object and returns that object.
Algorithm shamelessly stolen from the `logging.config` module from the standard library.
"""
names = dotted_path.split('.')
module_name = names.pop(0)
found = import_module(module_name)
for name in names:
try:
found = getattr(found, name)
except AttributeError:
module_name += f'.{name}'
import_module(module_name)
found = getattr(found, name)
return found
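# Example (not part of this module): resolving dotted paths to importable objects.
#   resolve_dotted_path('asyncio.sleep')                  -> the asyncio.sleep coroutine function
#   resolve_dotted_path('logging.handlers.QueueHandler')  -> the QueueHandler class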
class ClassMethodWorkaround:
"""Dirty workaround to make the `@classmethod` decorator work with properties."""
def __init__(self, method_or_property: Union[Callable, property]) -> None:
if isinstance(method_or_property, property):
self._getter = method_or_property.fget
else:
self._getter = method_or_property
def __get__(self, obj: Union[T, None], cls: Union[Type[T], None]) -> Any:
if obj is None:
return self._getter(cls)
return self._getter(obj)
# Starting with Python 3.9, this is thankfully no longer necessary.
if PYTHON_BEFORE_39:
classmethod = ClassMethodWorkaround
else:
classmethod = builtins.classmethod
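
This replacement `classmethod` is what keeps the `@classmethod` + `@property` stacking used by `ControlServer.client_class_name` working on Python 3.8. A small sketch of the pattern (the class and attribute names are made up for illustration):

    from asyncio_taskpool.internals.helpers import classmethod  # the workaround on 3.8, the builtin otherwise

    class Widget:
        _label = 'widget'

        @classmethod
        @property
        def label(cls) -> str:
            """Class-level read-only property."""
            return cls._label

    print(Widget.label)  # 'widget' (via the workaround on 3.8, via builtin descriptor chaining on 3.9/3.10)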


@ -0,0 +1,43 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Custom type definitions used in various modules.
This module should **not** be considered part of the public API.
"""
from asyncio.streams import StreamReader, StreamWriter
from pathlib import Path
from typing import Any, Awaitable, Callable, Coroutine, Iterable, Mapping, Tuple, TypeVar, Union
T = TypeVar('T')
ArgsT = Iterable[Any]
KwArgsT = Mapping[str, Any]
AnyCallableT = Callable[..., Union[T, Awaitable[T]]]
CoroutineFunc = Callable[..., Coroutine]
EndCB = Callable
CancelCB = Callable
ConnectedCallbackT = Callable[[StreamReader, StreamWriter], Awaitable[None]]
ClientConnT = Union[Tuple[StreamReader, StreamWriter], Tuple[None, None]]
PathT = Union[Path, str]

File diff suppressed because it is too large


@ -0,0 +1,66 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Definition of an :code:`asyncio.Queue` subclass with some small additions.
"""
from asyncio.queues import Queue as _Queue
from typing import Any
__all__ = ['Queue']
class Queue(_Queue):
"""
Adds a little syntactic sugar to the :code:`asyncio.Queue`.
Allows being used as an async context manager awaiting `get` upon entering the context and calling
:meth:`item_processed` upon exiting it.
"""
def item_processed(self) -> None:
"""
Does exactly the same as :meth:`asyncio.Queue.task_done`.
This method exists because `task_done` is an atrocious name for the method. It communicates the wrong thing,
invites confusion, and immensely reduces readability (in the context of this library). And readability counts.
"""
self.task_done()
async def __aenter__(self) -> Any:
"""
Implements an asynchronous context manager for the queue.
Upon entering :meth:`get` is awaited and subsequently whatever came out of the queue is returned.
It allows writing code this way:
>>> queue = Queue()
>>> ...
>>> async with queue as item:
>>> ...
"""
return await self.get()
async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
"""
Implements an asynchronous context manager for the queue.
Upon exiting :meth:`item_processed` is called. This is why this context manager may not always be what you want,
but in some situations it makes the code much cleaner.
"""
self.item_processed()
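
A slightly fuller sketch of the consumer pattern this context manager is meant for (the module path in the import is assumed, and the worker/items are made up for illustration):

    import asyncio

    from asyncio_taskpool.queue_context import Queue  # module name assumed; not shown in this diff

    async def consumer(queue: Queue) -> None:
        while True:
            async with queue as item:  # awaits `get()`, calls `item_processed()` on exit
                print("processing", item)

    async def main() -> None:
        queue = Queue()
        for i in range(3):
            queue.put_nowait(i)
        worker = asyncio.create_task(consumer(queue))
        await queue.join()  # returns once every queued item has been marked as processed
        worker.cancel()

    asyncio.run(main())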


@ -1,130 +0,0 @@
import logging
from abc import ABC, abstractmethod
from asyncio import AbstractServer
from asyncio.exceptions import CancelledError
from asyncio.streams import StreamReader, StreamWriter, start_unix_server
from asyncio.tasks import Task, create_task
from pathlib import Path
from typing import Tuple, Union, Optional
from . import constants
from .pool import SimpleTaskPool
from .client import ControlClient, UnixControlClient
log = logging.getLogger(__name__)
def tasks_str(num: int) -> str:
return "tasks" if num != 1 else "task"
def get_cmd_arg(msg: str) -> Union[Tuple[str, Optional[int]], Tuple[None, None]]:
cmd = msg.strip().split(' ', 1)
if len(cmd) > 1:
try:
return cmd[0], int(cmd[1])
except ValueError:
return None, None
return cmd[0], None
class ControlServer(ABC): # TODO: Implement interface for normal TaskPool instances, not just SimpleTaskPool
client_class = ControlClient
@abstractmethod
async def get_server_instance(self, client_connected_cb, **kwargs) -> AbstractServer:
raise NotImplementedError
@abstractmethod
def final_callback(self) -> None:
raise NotImplementedError
def __init__(self, pool: SimpleTaskPool, **server_kwargs) -> None:
self._pool: SimpleTaskPool = pool
self._server_kwargs = server_kwargs
self._server: Optional[AbstractServer] = None
async def _start_tasks(self, writer: StreamWriter, num: int = None) -> None:
if num is None:
num = 1
log.debug("%s requests starting %s %s", self.client_class.__name__, num, tasks_str(num))
writer.write(str(await self._pool.start(num)).encode())
def _stop_tasks(self, writer: StreamWriter, num: int = None) -> None:
if num is None:
num = 1
log.debug("%s requests stopping %s %s", self.client_class.__name__, num, tasks_str(num))
# the requested number may be greater than the total number of running tasks
writer.write(str(self._pool.stop(num)).encode())
def _stop_all_tasks(self, writer: StreamWriter) -> None:
log.debug("%s requests stopping all tasks", self.client_class.__name__)
writer.write(str(self._pool.stop_all()).encode())
def _pool_size(self, writer: StreamWriter) -> None:
log.debug("%s requests pool size", self.client_class.__name__)
writer.write(str(self._pool.size).encode())
def _pool_func(self, writer: StreamWriter) -> None:
log.debug("%s requests pool function", self.client_class.__name__)
writer.write(self._pool.func_name.encode())
async def _listen(self, reader: StreamReader, writer: StreamWriter) -> None:
while self._server.is_serving():
msg = (await reader.read(constants.MSG_BYTES)).decode().strip()
if not msg:
log.debug("%s disconnected", self.client_class.__name__)
break
cmd, arg = get_cmd_arg(msg)
if cmd == constants.CMD_START:
await self._start_tasks(writer, arg)
elif cmd == constants.CMD_STOP:
self._stop_tasks(writer, arg)
elif cmd == constants.CMD_STOP_ALL:
self._stop_all_tasks(writer)
elif cmd == constants.CMD_SIZE:
self._pool_size(writer)
elif cmd == constants.CMD_FUNC:
self._pool_func(writer)
else:
log.debug("%s sent invalid command: %s", self.client_class.__name__, msg)
writer.write(b"Invalid command!")
await writer.drain()
async def _client_connected_cb(self, reader: StreamReader, writer: StreamWriter) -> None:
log.debug("%s connected", self.client_class.__name__)
writer.write(str(self._pool).encode())
await writer.drain()
await self._listen(reader, writer)
async def _serve_forever(self) -> None:
try:
async with self._server:
await self._server.serve_forever()
except CancelledError:
log.debug("%s stopped", self.__class__.__name__)
finally:
self.final_callback()
async def serve_forever(self) -> Task:
log.debug("Starting %s...", self.__class__.__name__)
self._server = await self.get_server_instance(self._client_connected_cb, **self._server_kwargs)
return create_task(self._serve_forever())
class UnixControlServer(ControlServer):
client_class = UnixControlClient
def __init__(self, pool: SimpleTaskPool, **server_kwargs) -> None:
self._socket_path = Path(server_kwargs.pop('path'))
super().__init__(pool, **server_kwargs)
async def get_server_instance(self, client_connected_cb, **kwargs) -> AbstractServer:
srv = await start_unix_server(client_connected_cb, self._socket_path, **kwargs)
log.debug("Opened socket '%s'", str(self._socket_path))
return srv
def final_callback(self) -> None:
self._socket_path.unlink()
log.debug("Removed socket '%s'", str(self._socket_path))


@ -1,16 +0,0 @@
from asyncio.streams import StreamReader, StreamWriter
from typing import Any, Awaitable, Callable, Iterable, Mapping, Tuple, TypeVar, Union
T = TypeVar('T')
ArgsT = Iterable[Any]
KwArgsT = Mapping[str, Any]
AnyCallableT = Callable[[...], Union[Awaitable[T], T]]
CoroutineFunc = Callable[[...], Awaitable[Any]]
EndCallbackT = Callable
CancelCallbackT = Callable
ClientConnT = Union[Tuple[StreamReader, StreamWriter], Tuple[None, None]]

tests/__main__.py Normal file

@ -0,0 +1,30 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Main entry point for all unit tests.
"""
import sys
import unittest
if __name__ == '__main__':
test_suite = unittest.defaultTestLoader.discover('.')
test_runner = unittest.TextTestRunner(resultclass=unittest.TextTestResult)
result = test_runner.run(test_suite)
sys.exit(not result.wasSuccessful())


@ -0,0 +1,45 @@
from pathlib import Path
from unittest import IsolatedAsyncioTestCase
from unittest.mock import AsyncMock, MagicMock, patch
from asyncio_taskpool.control.client import TCPControlClient, UnixControlClient
from asyncio_taskpool.control import __main__ as module
class CLITestCase(IsolatedAsyncioTestCase):
def test_parse_cli(self):
socket_path = '/some/path/to.sock'
args = [module.UNIX, socket_path]
expected_kwargs = {
module.CLIENT_CLASS: UnixControlClient,
module.SOCKET_PATH: Path(socket_path)
}
parsed_kwargs = module.parse_cli(args)
self.assertDictEqual(expected_kwargs, parsed_kwargs)
host, port = '1.2.3.4', '1234'
args = [module.TCP, host, port]
expected_kwargs = {
module.CLIENT_CLASS: TCPControlClient,
module.HOST: host,
module.PORT: int(port)
}
parsed_kwargs = module.parse_cli(args)
self.assertDictEqual(expected_kwargs, parsed_kwargs)
with patch('sys.stderr'):
with self.assertRaises(SystemExit):
module.parse_cli(['invalid', 'foo', 'bar'])
@patch.object(module, 'parse_cli')
async def test_main(self, mock_parse_cli: MagicMock):
mock_client_start = AsyncMock()
mock_client = MagicMock(start=mock_client_start)
mock_client_cls = MagicMock(return_value=mock_client)
mock_client_kwargs = {'foo': 123, 'bar': 456, 'baz': 789}
mock_parse_cli.return_value = {module.CLIENT_CLASS: mock_client_cls, **mock_client_kwargs}
self.assertIsNone(await module.main())
mock_parse_cli.assert_called_once_with()
mock_client_cls.assert_called_once_with(**mock_client_kwargs)
mock_client_start.assert_awaited_once_with()


@ -0,0 +1,249 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.client` module.
"""
import json
import os
import shutil
import sys
from pathlib import Path
from unittest import IsolatedAsyncioTestCase, skipIf
from unittest.mock import AsyncMock, MagicMock, call, patch
from asyncio_taskpool.control import client
from asyncio_taskpool.internals.constants import CLIENT_INFO, SESSION_MSG_BYTES
FOO, BAR = 'foo', 'bar'
class ControlClientTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.abstract_patcher = patch('asyncio_taskpool.control.client.ControlClient.__abstractmethods__', set())
self.print_patcher = patch.object(client, 'print')
self.mock_abstract_methods = self.abstract_patcher.start()
self.mock_print = self.print_patcher.start()
self.kwargs = {FOO: 123, BAR: 456}
self.client = client.ControlClient(**self.kwargs)
self.mock_read = AsyncMock(return_value=FOO.encode())
self.mock_write, self.mock_drain = MagicMock(), AsyncMock()
self.mock_reader = MagicMock(read=self.mock_read)
self.mock_writer = MagicMock(write=self.mock_write, drain=self.mock_drain)
def tearDown(self) -> None:
self.abstract_patcher.stop()
self.print_patcher.stop()
def test_client_info(self):
self.assertEqual({CLIENT_INFO.TERMINAL_WIDTH: shutil.get_terminal_size().columns},
client.ControlClient._client_info())
async def test_abstract(self):
with self.assertRaises(NotImplementedError):
await self.client._open_connection(**self.kwargs)
def test_init(self):
self.assertEqual(self.kwargs, self.client._conn_kwargs)
self.assertFalse(self.client._connected)
@patch.object(client.ControlClient, '_client_info')
async def test__server_handshake(self, mock__client_info: MagicMock):
mock__client_info.return_value = mock_info = {FOO: 1, BAR: 9999}
self.assertIsNone(await self.client._server_handshake(self.mock_reader, self.mock_writer))
self.assertTrue(self.client._connected)
mock__client_info.assert_called_once_with()
self.mock_write.assert_has_calls([call(json.dumps(mock_info).encode()), call(b'\n')])
self.mock_drain.assert_awaited_once_with()
self.mock_read.assert_awaited_once_with(SESSION_MSG_BYTES)
self.mock_print.assert_has_calls([
call("Connected to", self.mock_read.return_value.decode()),
call("Type '-h' to get help and usage instructions for all available commands.\n")
])
@patch.object(client, 'input')
def test__get_command(self, mock_input: MagicMock):
self.client._connected = True
mock_input.return_value = ' ' + FOO.upper() + ' '
mock_close = MagicMock()
mock_writer = MagicMock(close=mock_close)
output = self.client._get_command(mock_writer)
self.assertEqual(FOO, output)
mock_input.assert_called_once()
mock_close.assert_not_called()
self.assertTrue(self.client._connected)
mock_input.reset_mock()
mock_input.side_effect = KeyboardInterrupt
self.assertIsNone(self.client._get_command(mock_writer))
mock_input.assert_called_once()
mock_close.assert_not_called()
self.assertTrue(self.client._connected)
mock_input.reset_mock()
mock_input.side_effect = EOFError
self.assertIsNone(self.client._get_command(mock_writer))
mock_input.assert_called_once()
mock_close.assert_called_once()
self.assertFalse(self.client._connected)
@patch.object(client.ControlClient, '_get_command')
async def test__interact(self, mock__get_command: MagicMock):
self.client._connected = True
mock__get_command.return_value = None
self.assertIsNone(await self.client._interact(self.mock_reader, self.mock_writer))
self.mock_write.assert_not_called()
self.mock_drain.assert_not_awaited()
self.mock_read.assert_not_awaited()
self.mock_print.assert_not_called()
self.assertTrue(self.client._connected)
mock__get_command.return_value = cmd = FOO + BAR + ' 123'
self.mock_drain.side_effect = err = ConnectionError()
self.assertIsNone(await self.client._interact(self.mock_reader, self.mock_writer))
self.mock_write.assert_has_calls([call(cmd.encode()), call(b'\n')])
self.mock_drain.assert_awaited_once_with()
self.mock_read.assert_not_awaited()
self.mock_print.assert_called_once_with(err, file=sys.stderr)
self.assertFalse(self.client._connected)
self.client._connected = True
self.mock_write.reset_mock()
self.mock_drain.reset_mock(side_effect=True)
self.mock_print.reset_mock()
self.assertIsNone(await self.client._interact(self.mock_reader, self.mock_writer))
self.mock_write.assert_has_calls([call(cmd.encode()), call(b'\n')])
self.mock_drain.assert_awaited_once_with()
self.mock_read.assert_awaited_once_with(SESSION_MSG_BYTES)
self.mock_print.assert_called_once_with(FOO)
self.assertTrue(self.client._connected)
@patch.object(client.ControlClient, '_interact')
@patch.object(client.ControlClient, '_server_handshake')
@patch.object(client.ControlClient, '_open_connection')
async def test_start(self, mock__open_connection: AsyncMock, mock__server_handshake: AsyncMock,
mock__interact: AsyncMock):
mock__open_connection.return_value = None, None
self.assertIsNone(await self.client.start())
mock__open_connection.assert_awaited_once_with(**self.kwargs)
mock__server_handshake.assert_not_awaited()
mock__interact.assert_not_awaited()
self.mock_print.assert_called_once_with("Failed to connect.", file=sys.stderr)
mock__open_connection.reset_mock()
self.mock_print.reset_mock()
mock__open_connection.return_value = self.mock_reader, self.mock_writer
self.assertIsNone(await self.client.start())
mock__open_connection.assert_awaited_once_with(**self.kwargs)
mock__server_handshake.assert_awaited_once_with(self.mock_reader, self.mock_writer)
mock__interact.assert_not_awaited()
self.mock_print.assert_called_once_with("Disconnected from control server.")
mock__open_connection.reset_mock()
mock__server_handshake.reset_mock()
self.mock_print.reset_mock()
self.client._connected = True
def disconnect(*_args, **_kwargs) -> None: self.client._connected = False
mock__interact.side_effect = disconnect
self.assertIsNone(await self.client.start())
mock__open_connection.assert_awaited_once_with(**self.kwargs)
mock__server_handshake.assert_awaited_once_with(self.mock_reader, self.mock_writer)
mock__interact.assert_awaited_once_with(self.mock_reader, self.mock_writer)
self.mock_print.assert_called_once_with("Disconnected from control server.")
class TCPControlClientTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.base_init_patcher = patch.object(client.ControlClient, '__init__')
self.mock_base_init = self.base_init_patcher.start()
self.host, self.port = 'localhost', 12345
self.kwargs = {FOO: 123, BAR: 456}
self.client = client.TCPControlClient(host=self.host, port=self.port, **self.kwargs)
def tearDown(self) -> None:
self.base_init_patcher.stop()
def test_init(self):
self.assertEqual(self.host, self.client._host)
self.assertEqual(self.port, self.client._port)
self.mock_base_init.assert_called_once_with(**self.kwargs)
@patch.object(client, 'print')
@patch.object(client, 'open_connection')
async def test__open_connection(self, mock_open_connection: AsyncMock, mock_print: MagicMock):
mock_open_connection.return_value = expected_output = 'something'
kwargs = {'a': 1, 'b': 2}
output = await self.client._open_connection(**kwargs)
self.assertEqual(expected_output, output)
mock_open_connection.assert_awaited_once_with(self.host, self.port, **kwargs)
mock_print.assert_not_called()
mock_open_connection.reset_mock()
mock_open_connection.side_effect = e = ConnectionError()
output1, output2 = await self.client._open_connection(**kwargs)
self.assertIsNone(output1)
self.assertIsNone(output2)
mock_open_connection.assert_awaited_once_with(self.host, self.port, **kwargs)
mock_print.assert_called_once_with(str(e), file=sys.stderr)
@skipIf(os.name == 'nt', "No Unix sockets on Windows :(")
class UnixControlClientTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.base_init_patcher = patch.object(client.ControlClient, '__init__')
self.mock_base_init = self.base_init_patcher.start()
self.path = '/tmp/asyncio_taskpool'
self.kwargs = {FOO: 123, BAR: 456}
self.client = client.UnixControlClient(socket_path=self.path, **self.kwargs)
def tearDown(self) -> None:
self.base_init_patcher.stop()
def test_init(self):
self.assertEqual(Path(self.path), self.client._socket_path)
self.mock_base_init.assert_called_once_with(**self.kwargs)
@patch.object(client, 'print')
async def test__open_connection(self, mock_print: MagicMock):
expected_output = 'something'
self.client._open_unix_connection = mock_open_unix_connection = AsyncMock(return_value=expected_output)
kwargs = {'a': 1, 'b': 2}
output = await self.client._open_connection(**kwargs)
self.assertEqual(expected_output, output)
mock_open_unix_connection.assert_awaited_once_with(Path(self.path), **kwargs)
mock_print.assert_not_called()
mock_open_unix_connection.reset_mock()
mock_open_unix_connection.side_effect = FileNotFoundError
output1, output2 = await self.client._open_connection(**kwargs)
self.assertIsNone(output1)
self.assertIsNone(output2)
mock_open_unix_connection.assert_awaited_once_with(Path(self.path), **kwargs)
mock_print.assert_called_once_with("No socket at", Path(self.path), file=sys.stderr)


@ -0,0 +1,311 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.control.parser` module.
"""
from argparse import ArgumentParser, HelpFormatter, ArgumentDefaultsHelpFormatter, RawTextHelpFormatter, SUPPRESS
from ast import literal_eval
from inspect import signature
from unittest import TestCase
from unittest.mock import MagicMock, call, patch
from typing import Iterable
from asyncio_taskpool.control import parser
from asyncio_taskpool.exceptions import HelpRequested, ParserError
from asyncio_taskpool.internals.helpers import resolve_dotted_path
from asyncio_taskpool.internals.types import ArgsT, CancelCB, CoroutineFunc, EndCB, KwArgsT
FOO, BAR = 'foo', 'bar'
class ControlParserTestCase(TestCase):
def setUp(self) -> None:
self.help_formatter_factory_patcher = patch.object(parser.ControlParser, 'help_formatter_factory')
self.mock_help_formatter_factory = self.help_formatter_factory_patcher.start()
self.mock_help_formatter_factory.return_value = RawTextHelpFormatter
self.stream, self.terminal_width = MagicMock(), 420
self.kwargs = {
'stream': self.stream,
'terminal_width': self.terminal_width,
'formatter_class': FOO
}
self.parser = parser.ControlParser(**self.kwargs)
def tearDown(self) -> None:
self.help_formatter_factory_patcher.stop()
def test_help_formatter_factory(self):
self.help_formatter_factory_patcher.stop()
class MockBaseClass(HelpFormatter):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
terminal_width = 123456789
cls = parser.ControlParser.help_formatter_factory(terminal_width, MockBaseClass)
self.assertTrue(issubclass(cls, MockBaseClass))
instance = cls('prog')
self.assertEqual(terminal_width, getattr(instance, '_width'))
cls = parser.ControlParser.help_formatter_factory(terminal_width)
self.assertTrue(issubclass(cls, ArgumentDefaultsHelpFormatter))
instance = cls('prog')
self.assertEqual(terminal_width, getattr(instance, '_width'))
def test_init(self):
self.assertIsInstance(self.parser, ArgumentParser)
self.assertEqual(self.stream, self.parser._stream)
self.assertEqual(self.terminal_width, self.parser._terminal_width)
self.mock_help_formatter_factory.assert_called_once_with(self.terminal_width, FOO)
self.assertEqual(RawTextHelpFormatter, getattr(self.parser, 'formatter_class'))
self.assertSetEqual(set(), self.parser._flags)
self.assertIsNone(self.parser._commands)
@patch.object(parser, 'get_first_doc_line')
def test_add_function_command(self, mock_get_first_doc_line: MagicMock):
def foo_bar(): pass
mock_subparser = MagicMock()
mock_add_parser = MagicMock(return_value=mock_subparser)
self.parser._commands = MagicMock(add_parser=mock_add_parser)
mock_get_first_doc_line.return_value = mock_help = 'help 123'
kwargs = {FOO: 1, BAR: 2, parser.DESCRIPTION: FOO + BAR}
expected_name = 'foo-bar'
expected_kwargs = {parser.NAME: expected_name, parser.PROG: expected_name, parser.HELP: mock_help, **kwargs}
to_omit = ['abc', 'xyz']
output = self.parser.add_function_command(foo_bar, omit_params=to_omit, **kwargs)
self.assertEqual(mock_subparser, output)
mock_add_parser.assert_called_once_with(**expected_kwargs)
mock_subparser.add_function_args.assert_called_once_with(foo_bar, to_omit)
@patch.object(parser, 'get_first_doc_line')
def test_add_property_command(self, mock_get_first_doc_line: MagicMock):
def get_prop(_self): pass
def set_prop(_self, _value): pass
prop = property(get_prop)
mock_subparser = MagicMock()
mock_add_parser = MagicMock(return_value=mock_subparser)
self.parser._commands = MagicMock(add_parser=mock_add_parser)
mock_get_first_doc_line.return_value = mock_help = 'help 123'
kwargs = {FOO: 1, BAR: 2, parser.DESCRIPTION: FOO + BAR}
expected_name = 'get-prop'
expected_kwargs = {parser.NAME: expected_name, parser.PROG: expected_name, parser.HELP: mock_help, **kwargs}
output = self.parser.add_property_command(prop, **kwargs)
self.assertEqual(mock_subparser, output)
mock_get_first_doc_line.assert_called_once_with(get_prop)
mock_add_parser.assert_called_once_with(**expected_kwargs)
mock_subparser.add_function_arg.assert_not_called()
mock_get_first_doc_line.reset_mock()
mock_add_parser.reset_mock()
prop = property(get_prop, set_prop)
expected_help = f"Get/set the `.{expected_name}` property"
expected_kwargs = {parser.NAME: expected_name, parser.PROG: expected_name, parser.HELP: expected_help, **kwargs}
output = self.parser.add_property_command(prop, **kwargs)
self.assertEqual(mock_subparser, output)
mock_get_first_doc_line.assert_has_calls([call(get_prop), call(set_prop)])
mock_add_parser.assert_called_once_with(**expected_kwargs)
mock_subparser.add_function_arg.assert_called_once_with(
tuple(signature(set_prop).parameters.values())[1],
nargs='?',
default=SUPPRESS,
help=f"If provided: {mock_help} If omitted: {mock_help}"
)
@patch.object(parser.ControlParser, 'add_property_command')
@patch.object(parser.ControlParser, 'add_function_command')
def test_add_class_commands(self, mock_add_function_command: MagicMock, mock_add_property_command: MagicMock):
class FooBar:
some_attribute = None
def _protected(self, _): pass
def __private(self, _): pass
def to_omit(self, _): pass
def method(self, _): pass
@property
def prop(self): return None
mock_set_defaults = MagicMock()
mock_subparser = MagicMock(set_defaults=mock_set_defaults)
mock_add_function_command.return_value = mock_add_property_command.return_value = mock_subparser
x = 'x'
common_kwargs = {'stream': self.parser._stream, parser.CLIENT_INFO.TERMINAL_WIDTH: self.parser._terminal_width}
expected_output = {'method': mock_subparser, 'prop': mock_subparser}
output = self.parser.add_class_commands(FooBar, public_only=True, omit_members=['to_omit'], member_arg_name=x)
self.assertDictEqual(expected_output, output)
mock_add_function_command.assert_called_once_with(FooBar.method, **common_kwargs)
mock_add_property_command.assert_called_once_with(FooBar.prop, FooBar.__name__, **common_kwargs)
mock_set_defaults.assert_has_calls([call(**{x: FooBar.method}), call(**{x: FooBar.prop})])
@patch.object(parser.ArgumentParser, 'add_subparsers')
def test_add_subparsers(self, mock_base_add_subparsers: MagicMock):
args, kwargs = [1, 2, 42], {FOO: 123, BAR: 456}
mock_base_add_subparsers.return_value = mock_action = MagicMock()
output = self.parser.add_subparsers(*args, **kwargs)
self.assertEqual(mock_action, output)
mock_base_add_subparsers.assert_called_once_with(*args, **kwargs)
def test__print_message(self):
self.stream.write = MagicMock()
self.assertIsNone(self.parser._print_message(''))
self.stream.write.assert_not_called()
msg = 'foo bar baz'
self.assertIsNone(self.parser._print_message(msg))
self.stream.write.assert_called_once_with(msg)
@patch.object(parser.ControlParser, '_print_message')
def test_exit(self, mock__print_message: MagicMock):
self.assertIsNone(self.parser.exit(123, ''))
mock__print_message.assert_not_called()
msg = 'foo bar baz'
self.assertIsNone(self.parser.exit(123, msg))
mock__print_message.assert_called_once_with(msg)
@patch.object(parser.ArgumentParser, 'error')
def test_error(self, mock_supercls_error: MagicMock):
with self.assertRaises(ParserError):
self.parser.error(FOO + BAR)
mock_supercls_error.assert_called_once_with(message=FOO + BAR)
@patch.object(parser.ArgumentParser, 'print_help')
def test_print_help(self, mock_print_help: MagicMock):
arg = MagicMock()
with self.assertRaises(HelpRequested):
self.parser.print_help(arg)
mock_print_help.assert_called_once_with(arg)
@patch.object(parser, '_get_type_from_annotation')
@patch.object(parser.ArgumentParser, 'add_argument')
def test_add_function_arg(self, mock_add_argument: MagicMock, mock__get_type_from_annotation: MagicMock):
mock_add_argument.return_value = expected_output = 'action'
mock__get_type_from_annotation.return_value = mock_type = 'fake'
foo_type, args_type, bar_type, baz_type, boo_type = tuple, str, int, float, complex
bar_default, baz_default, boo_default = 1, 0.1, 1j
def func(foo: foo_type, *args: args_type, bar: bar_type = bar_default, baz: baz_type = baz_default,
boo: boo_type = boo_default, flag: bool = False):
return foo, args, bar, baz, boo, flag
param_foo, param_args, param_bar, param_baz, param_boo, param_flag = signature(func).parameters.values()
kwargs = {FOO + BAR: 'xyz'}
self.assertEqual(expected_output, self.parser.add_function_arg(param_foo, **kwargs))
mock_add_argument.assert_called_once_with('foo', type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(foo_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_args, **kwargs))
mock_add_argument.assert_called_once_with('args', nargs='*', type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(args_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_bar, **kwargs))
mock_add_argument.assert_called_once_with('-b', '--bar', default=bar_default, type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(bar_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_baz, **kwargs))
mock_add_argument.assert_called_once_with('-B', '--baz', default=baz_default, type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(baz_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_boo, **kwargs))
mock_add_argument.assert_called_once_with('--boo', default=boo_default, type=mock_type, **kwargs)
mock__get_type_from_annotation.assert_called_once_with(boo_type)
mock_add_argument.reset_mock()
mock__get_type_from_annotation.reset_mock()
self.assertEqual(expected_output, self.parser.add_function_arg(param_flag, **kwargs))
mock_add_argument.assert_called_once_with('-f', '--flag', action='store_true', **kwargs)
mock__get_type_from_annotation.assert_not_called()
@patch.object(parser.ControlParser, 'add_function_arg')
def test_add_function_args(self, mock_add_function_arg: MagicMock):
def func(foo: str, *args: int, bar: float = 0.1):
return foo, args, bar
_, param_args, param_bar = signature(func).parameters.values()
self.assertIsNone(self.parser.add_function_args(func, omit=['foo']))
mock_add_function_arg.assert_has_calls([
call(param_args, help=repr(param_args.annotation)),
call(param_bar, help=repr(param_bar.annotation)),
])
class RestTestCase(TestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = parser.log.level
parser.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
parser.log.setLevel(cls.log_lvl)
def test__get_arg_type_wrapper(self):
type_wrap = parser._get_arg_type_wrapper(int)
self.assertEqual('int', type_wrap.__name__)
self.assertEqual(SUPPRESS, type_wrap(SUPPRESS))
self.assertEqual(13, type_wrap('13'))
name = 'abcdef'
mock_type = MagicMock(side_effect=[parser.ArgumentTypeError, TypeError, ValueError, Exception], __name__=name)
type_wrap = parser._get_arg_type_wrapper(mock_type)
self.assertEqual(name, type_wrap.__name__)
with self.assertRaises(parser.ArgumentTypeError):
type_wrap(FOO)
with self.assertRaises(TypeError):
type_wrap(FOO)
with self.assertRaises(ValueError):
type_wrap(FOO)
with self.assertRaises(parser.ArgumentTypeError):
type_wrap(FOO)
@patch.object(parser, '_get_arg_type_wrapper')
def test__get_type_from_annotation(self, mock__get_arg_type_wrapper: MagicMock):
mock__get_arg_type_wrapper.return_value = expected_output = FOO + BAR
dotted_path_ann = [CoroutineFunc, EndCB, CancelCB]
literal_eval_ann = [ArgsT, KwArgsT, Iterable[ArgsT], Iterable[KwArgsT]]
any_other_ann = MagicMock()
for a in dotted_path_ann:
self.assertEqual(expected_output, parser._get_type_from_annotation(a))
mock__get_arg_type_wrapper.assert_has_calls(len(dotted_path_ann) * [call(resolve_dotted_path)])
mock__get_arg_type_wrapper.reset_mock()
for a in literal_eval_ann:
self.assertEqual(expected_output, parser._get_type_from_annotation(a))
mock__get_arg_type_wrapper.assert_has_calls(len(literal_eval_ann) * [call(literal_eval)])
mock__get_arg_type_wrapper.reset_mock()
self.assertEqual(expected_output, parser._get_type_from_annotation(any_other_ann))
mock__get_arg_type_wrapper.assert_called_once_with(any_other_ann)
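Taken together, the assertions above pin down how a coroutine function's signature is expected to translate into command-line arguments. A rough, purely illustrative sketch of that mapping in plain `argparse` terms (not the library's actual implementation; the function signature and parameter names are made up):
```python
from argparse import ArgumentParser

# Hypothetical signature: def func(foo: str, *args: str, bar: int = 1, baz: float = 0.1, flag: bool = False)
cli = ArgumentParser(prog='func')
cli.add_argument('foo', type=str)                         # positional parameter -> positional argument
cli.add_argument('args', nargs='*', type=str)             # *args -> variadic positional
cli.add_argument('-b', '--bar', default=1, type=int)      # keyword parameter -> short + long option
cli.add_argument('-B', '--baz', default=0.1, type=float)  # short letter 'b' already taken -> uppercase fallback
cli.add_argument('-f', '--flag', action='store_true')     # bool with False default -> store_true flag
print(cli.parse_args(['abc', 'x', 'y', '--bar', '2', '--flag']))
```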


@ -0,0 +1,210 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.server` module.
"""
import asyncio
import logging
import os
from pathlib import Path
from unittest import IsolatedAsyncioTestCase, skipIf
from unittest.mock import AsyncMock, MagicMock, patch
from asyncio_taskpool.control import server
from asyncio_taskpool.control.client import ControlClient, TCPControlClient, UnixControlClient
FOO, BAR = 'foo', 'bar'
class ControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = server.log.level
server.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
server.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.abstract_patcher = patch('asyncio_taskpool.control.server.ControlServer.__abstractmethods__', set())
self.mock_abstract_methods = self.abstract_patcher.start()
self.mock_pool = MagicMock()
self.kwargs = {FOO: 123, BAR: 456}
self.server = server.ControlServer(pool=self.mock_pool, **self.kwargs)
def tearDown(self) -> None:
self.abstract_patcher.stop()
def test_client_class_name(self):
self.assertEqual(ControlClient.__name__, server.ControlServer.client_class_name)
async def test_abstract(self):
with self.assertRaises(NotImplementedError):
args = [AsyncMock()]
await self.server._get_server_instance(*args)
with self.assertRaises(NotImplementedError):
self.server._final_callback()
def test_init(self):
self.assertEqual(self.mock_pool, self.server._pool)
self.assertEqual(self.kwargs, self.server._server_kwargs)
self.assertIsNone(self.server._server)
def test_pool(self):
self.assertEqual(self.mock_pool, self.server.pool)
def test_is_serving(self):
self.server._server = MagicMock(is_serving=MagicMock(return_value=FOO + BAR))
self.assertEqual(FOO + BAR, self.server.is_serving())
@patch.object(server, 'ControlSession')
async def test__client_connected_cb(self, mock_client_session_cls: MagicMock):
mock_client_handshake, mock_listen = AsyncMock(), AsyncMock()
mock_client_session_cls.return_value = MagicMock(client_handshake=mock_client_handshake, listen=mock_listen)
mock_reader, mock_writer = MagicMock(), MagicMock()
self.assertIsNone(await self.server._client_connected_cb(mock_reader, mock_writer))
mock_client_session_cls.assert_called_once_with(self.server, mock_reader, mock_writer)
mock_client_handshake.assert_awaited_once_with()
mock_listen.assert_awaited_once_with()
@patch.object(server.ControlServer, '_final_callback')
async def test__serve_forever(self, mock__final_callback: MagicMock):
mock_aenter, mock_serve_forever = AsyncMock(), AsyncMock(side_effect=asyncio.CancelledError)
self.server._server = MagicMock(__aenter__=mock_aenter, serve_forever=mock_serve_forever)
with self.assertLogs(server.log, logging.DEBUG):
self.assertIsNone(await self.server._serve_forever())
mock_aenter.assert_awaited_once_with()
mock_serve_forever.assert_awaited_once_with()
mock__final_callback.assert_called_once_with()
mock_aenter.reset_mock()
mock_serve_forever.reset_mock(side_effect=True)
mock__final_callback.reset_mock()
self.assertIsNone(await self.server._serve_forever())
mock_aenter.assert_awaited_once_with()
mock_serve_forever.assert_awaited_once_with()
mock__final_callback.assert_called_once_with()
@patch.object(server, 'create_task')
@patch.object(server.ControlServer, '_serve_forever', new_callable=MagicMock())
@patch.object(server.ControlServer, '_get_server_instance')
async def test_serve_forever(self, mock__get_server_instance: AsyncMock, mock__serve_forever: MagicMock,
mock_create_task: MagicMock):
mock__serve_forever.return_value = mock_awaitable = 'some_coroutine'
mock_create_task.return_value = expected_output = 12345
output = await self.server.serve_forever()
self.assertEqual(expected_output, output)
mock__get_server_instance.assert_awaited_once_with(self.server._client_connected_cb, **self.kwargs)
mock__serve_forever.assert_called_once_with()
mock_create_task.assert_called_once_with(mock_awaitable)
class TCPControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = server.log.level
server.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
server.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.base_init_patcher = patch.object(server.ControlServer, '__init__')
self.mock_base_init = self.base_init_patcher.start()
self.mock_pool = MagicMock()
self.host, self.port = 'localhost', 12345
self.kwargs = {FOO: 123, BAR: 456}
self.server = server.TCPControlServer(pool=self.mock_pool, host=self.host, port=self.port, **self.kwargs)
def tearDown(self) -> None:
self.base_init_patcher.stop()
def test__client_class(self):
self.assertEqual(TCPControlClient, self.server._client_class)
def test_init(self):
self.assertEqual(self.host, self.server._host)
self.assertEqual(self.port, self.server._port)
self.mock_base_init.assert_called_once_with(self.mock_pool, **self.kwargs)
@patch.object(server, 'start_server')
async def test__get_server_instance(self, mock_start_server: AsyncMock):
mock_start_server.return_value = expected_output = 'totally_a_server'
mock_callback, mock_kwargs = MagicMock(), {'a': 1, 'b': 2}
args = [mock_callback]
output = await self.server._get_server_instance(*args, **mock_kwargs)
self.assertEqual(expected_output, output)
mock_start_server.assert_called_once_with(mock_callback, self.host, self.port, **mock_kwargs)
def test__final_callback(self):
self.assertIsNone(self.server._final_callback())
@skipIf(os.name == 'nt', "No Unix sockets on Windows :(")
class UnixControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = server.log.level
server.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
server.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.base_init_patcher = patch.object(server.ControlServer, '__init__')
self.mock_base_init = self.base_init_patcher.start()
self.mock_pool = MagicMock()
self.path = '/tmp/asyncio_taskpool'
self.kwargs = {FOO: 123, BAR: 456}
self.server = server.UnixControlServer(pool=self.mock_pool, socket_path=self.path, **self.kwargs)
def tearDown(self) -> None:
self.base_init_patcher.stop()
def test__client_class(self):
self.assertEqual(UnixControlClient, self.server._client_class)
def test_init(self):
self.assertEqual(Path(self.path), self.server._socket_path)
self.mock_base_init.assert_called_once_with(self.mock_pool, **self.kwargs)
async def test__get_server_instance(self):
expected_output = 'totally_a_server'
self.server._start_unix_server = mock_start_unix_server = AsyncMock(return_value=expected_output)
mock_callback, mock_kwargs = MagicMock(), {'a': 1, 'b': 2}
args = [mock_callback]
output = await self.server._get_server_instance(*args, **mock_kwargs)
self.assertEqual(expected_output, output)
mock_start_unix_server.assert_called_once_with(mock_callback, Path(self.path), **mock_kwargs)
def test__final_callback(self):
self.server._socket_path = MagicMock()
self.assertIsNone(self.server._final_callback())
self.server._socket_path.unlink.assert_called_once_with()


@ -0,0 +1,217 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.session` module.
"""
import json
from argparse import ArgumentError, Namespace
from io import StringIO
from unittest import IsolatedAsyncioTestCase
from unittest.mock import AsyncMock, MagicMock, patch, call
from asyncio_taskpool.control import session
from asyncio_taskpool.internals.constants import CLIENT_INFO, CMD
from asyncio_taskpool.exceptions import HelpRequested
from asyncio_taskpool.pool import SimpleTaskPool
FOO, BAR = 'foo', 'bar'
class ControlServerTestCase(IsolatedAsyncioTestCase):
log_lvl: int
@classmethod
def setUpClass(cls) -> None:
cls.log_lvl = session.log.level
session.log.setLevel(999)
@classmethod
def tearDownClass(cls) -> None:
session.log.setLevel(cls.log_lvl)
def setUp(self) -> None:
self.mock_pool = MagicMock(spec=SimpleTaskPool(AsyncMock()))
self.mock_client_class_name = FOO + BAR
self.mock_server = MagicMock(pool=self.mock_pool,
client_class_name=self.mock_client_class_name)
self.mock_reader = MagicMock()
self.mock_writer = MagicMock()
self.session = session.ControlSession(self.mock_server, self.mock_reader, self.mock_writer)
def test_init(self):
self.assertEqual(self.mock_server, self.session._control_server)
self.assertEqual(self.mock_pool, self.session._pool)
self.assertEqual(self.mock_client_class_name, self.session._client_class_name)
self.assertEqual(self.mock_reader, self.session._reader)
self.assertEqual(self.mock_writer, self.session._writer)
self.assertIsNone(self.session._parser)
self.assertIsInstance(self.session._response_buffer, StringIO)
@patch.object(session, 'return_or_exception')
async def test__exec_method_and_respond(self, mock_return_or_exception: AsyncMock):
def method(self, arg1, arg2, *var_args, **rest): pass
test_arg1, test_arg2, test_var_args, test_rest = 123, 'xyz', [0.1, 0.2, 0.3], {'aaa': 1, 'bbb': 11}
kwargs = {'arg1': test_arg1, 'arg2': test_arg2, 'var_args': test_var_args}
mock_return_or_exception.return_value = None
self.assertIsNone(await self.session._exec_method_and_respond(method, **kwargs, **test_rest))
mock_return_or_exception.assert_awaited_once_with(
method, self.mock_pool, test_arg1, test_arg2, *test_var_args, **test_rest
)
self.assertEqual(session.CMD_OK.decode(), self.session._response_buffer.getvalue())
@patch.object(session, 'return_or_exception')
async def test__exec_property_and_respond(self, mock_return_or_exception: AsyncMock):
def prop_get(_): pass
def prop_set(_): pass
prop = property(prop_get, prop_set)
kwargs = {'value': 'something'}
mock_return_or_exception.return_value = None
self.assertIsNone(await self.session._exec_property_and_respond(prop, **kwargs))
mock_return_or_exception.assert_awaited_once_with(prop_set, self.mock_pool, **kwargs)
self.assertEqual(session.CMD_OK.decode(), self.session._response_buffer.getvalue())
mock_return_or_exception.reset_mock()
self.session._response_buffer.seek(0)
self.session._response_buffer.truncate()
mock_return_or_exception.return_value = val = 420.69
self.assertIsNone(await self.session._exec_property_and_respond(prop))
mock_return_or_exception.assert_awaited_once_with(prop_get, self.mock_pool)
self.assertEqual(str(val), self.session._response_buffer.getvalue())
@patch.object(session, 'ControlParser')
async def test_client_handshake(self, mock_parser_cls: MagicMock):
mock_add_subparsers, mock_add_class_commands = MagicMock(), MagicMock()
mock_parser = MagicMock(add_subparsers=mock_add_subparsers, add_class_commands=mock_add_class_commands)
mock_parser_cls.return_value = mock_parser
width = 5678
msg = ' ' + json.dumps({CLIENT_INFO.TERMINAL_WIDTH: width, FOO: BAR}) + ' '
mock_readline = AsyncMock(return_value=msg.encode())
self.mock_reader.readline = mock_readline
self.mock_writer.drain = AsyncMock()
expected_parser_kwargs = {
'stream': self.session._response_buffer,
CLIENT_INFO.TERMINAL_WIDTH: width,
'prog': '',
'usage': f'[-h] [{CMD}] ...'
}
expected_subparsers_kwargs = {
'title': "Commands",
'metavar': "(A command followed by '-h' or '--help' will show command-specific help.)"
}
self.assertIsNone(await self.session.client_handshake())
self.assertEqual(mock_parser, self.session._parser)
mock_readline.assert_awaited_once_with()
mock_parser_cls.assert_called_once_with(**expected_parser_kwargs)
mock_add_subparsers.assert_called_once_with(**expected_subparsers_kwargs)
mock_add_class_commands.assert_called_once_with(self.mock_pool.__class__)
self.mock_writer.write.assert_called_once_with(str(self.mock_pool).encode() + b'\n')
self.mock_writer.drain.assert_awaited_once_with()
@patch.object(session.ControlSession, '_exec_property_and_respond')
@patch.object(session.ControlSession, '_exec_method_and_respond')
async def test__parse_command(self, mock__exec_method_and_respond: AsyncMock,
mock__exec_property_and_respond: AsyncMock):
def method(_): pass
prop = property(method)
msg = 'asdf asd as a'
kwargs = {FOO: BAR, 'hello': 'python'}
mock_parse_args = MagicMock(return_value=Namespace(**{CMD: method}, **kwargs))
self.session._parser = MagicMock(parse_args=mock_parse_args)
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.assertEqual('', self.session._response_buffer.getvalue())
mock__exec_method_and_respond.assert_awaited_once_with(method, **kwargs)
mock__exec_property_and_respond.assert_not_called()
mock__exec_method_and_respond.reset_mock()
mock_parse_args.reset_mock()
mock_parse_args.return_value = Namespace(**{CMD: prop}, **kwargs)
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.assertEqual('', self.session._response_buffer.getvalue())
mock__exec_method_and_respond.assert_not_called()
mock__exec_property_and_respond.assert_awaited_once_with(prop, **kwargs)
mock__exec_property_and_respond.reset_mock()
mock_parse_args.reset_mock()
bad_command = 'definitely not a function or property'
mock_parse_args.return_value = Namespace(**{CMD: bad_command}, **kwargs)
with patch.object(session, 'CommandError') as cmd_err_cls:
cmd_err_cls.return_value = exc = MagicMock()
self.assertIsNone(await self.session._parse_command(msg))
cmd_err_cls.assert_called_once_with(f"Unknown command object: {bad_command}")
mock_parse_args.assert_called_once_with(msg.split(' '))
mock__exec_method_and_respond.assert_not_called()
mock__exec_property_and_respond.assert_not_called()
self.assertEqual(str(exc), self.session._response_buffer.getvalue())
mock__exec_property_and_respond.reset_mock()
mock_parse_args.reset_mock()
self.session._response_buffer.seek(0)
self.session._response_buffer.truncate()
mock_parse_args.side_effect = exc = ArgumentError(MagicMock(), "oops")
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.assertEqual(str(exc), self.session._response_buffer.getvalue())
mock__exec_method_and_respond.assert_not_awaited()
mock__exec_property_and_respond.assert_not_awaited()
mock_parse_args.reset_mock()
self.session._response_buffer.seek(0)
self.session._response_buffer.truncate()
mock_parse_args.side_effect = HelpRequested()
self.assertIsNone(await self.session._parse_command(msg))
mock_parse_args.assert_called_once_with(msg.split(' '))
self.assertEqual('', self.session._response_buffer.getvalue())
mock__exec_method_and_respond.assert_not_awaited()
mock__exec_property_and_respond.assert_not_awaited()
@patch.object(session.ControlSession, '_parse_command')
async def test_listen(self, mock__parse_command: AsyncMock):
def make_reader_return_empty():
self.mock_reader.readline.return_value = b''
self.mock_writer.drain = AsyncMock(side_effect=make_reader_return_empty)
msg = "fascinating"
self.mock_reader.readline = AsyncMock(return_value=f' {msg} '.encode())
response = FOO + BAR + FOO
self.session._response_buffer.write(response)
self.assertIsNone(await self.session.listen())
self.mock_reader.readline.assert_has_awaits([call(), call()])
mock__parse_command.assert_awaited_once_with(msg)
self.assertEqual('', self.session._response_buffer.getvalue())
self.mock_writer.write.assert_called_once_with(response.encode() + b'\n')
self.mock_writer.drain.assert_awaited_once_with()
self.mock_reader.readline.reset_mock()
mock__parse_command.reset_mock()
self.mock_writer.write.reset_mock()
self.mock_writer.drain.reset_mock()
self.mock_server.is_serving = MagicMock(return_value=False)
self.assertIsNone(await self.session.listen())
self.mock_reader.readline.assert_not_awaited()
mock__parse_command.assert_not_awaited()
self.mock_writer.write.assert_not_called()
self.mock_writer.drain.assert_not_awaited()
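For orientation, the handshake and listen tests above imply roughly the following wire exchange from a client's point of view (a hypothetical sketch assuming a TCP control server listening on 127.0.0.1:9999; this is not the actual ControlClient code):
```python
import asyncio
import json
from asyncio_taskpool.internals.constants import CLIENT_INFO

async def handshake_sketch() -> None:
    reader, writer = await asyncio.open_connection('127.0.0.1', 9999)
    # The client first sends a single JSON line of client info (including its terminal width) ...
    writer.write(json.dumps({CLIENT_INFO.TERMINAL_WIDTH: 120}).encode() + b'\n')
    await writer.drain()
    # ... and the server answers with one line describing the pool, e.g. "SimpleTaskPool-0".
    print((await reader.readline()).decode().strip())
    # Afterwards, every command line sent by the client receives a newline-terminated response.

asyncio.run(handshake_sketch())
```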



@ -0,0 +1,84 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.group_register` module.
"""
from asyncio.locks import Lock
from unittest import IsolatedAsyncioTestCase
from unittest.mock import MagicMock, patch
from asyncio_taskpool.internals import group_register
FOO, BAR = 'foo', 'bar'
class TaskGroupRegisterTestCase(IsolatedAsyncioTestCase):
def setUp(self) -> None:
self.reg = group_register.TaskGroupRegister()
def test_init(self):
ids = [FOO, BAR, 1, 2]
reg = group_register.TaskGroupRegister(*ids)
self.assertSetEqual(set(ids), reg._ids)
self.assertIsInstance(reg._lock, Lock)
def test___contains__(self):
self.reg._ids = {1, 2, 3}
for i in self.reg._ids:
self.assertTrue(i in self.reg)
self.assertFalse(4 in self.reg)
@patch.object(group_register, 'iter', return_value=FOO)
def test___iter__(self, mock_iter: MagicMock):
self.assertEqual(FOO, self.reg.__iter__())
mock_iter.assert_called_once_with(self.reg._ids)
def test___len__(self):
self.reg._ids = [1, 2, 3, 4]
self.assertEqual(4, len(self.reg))
def test_add(self):
self.assertSetEqual(set(), self.reg._ids)
self.assertIsNone(self.reg.add(123))
self.assertSetEqual({123}, self.reg._ids)
def test_discard(self):
self.reg._ids = {123}
self.assertIsNone(self.reg.discard(0))
self.assertIsNone(self.reg.discard(999))
self.assertIsNone(self.reg.discard(123))
self.assertSetEqual(set(), self.reg._ids)
async def test_acquire(self):
self.assertFalse(self.reg._lock.locked())
await self.reg.acquire()
self.assertTrue(self.reg._lock.locked())
def test_release(self):
self.reg._lock._locked = True
self.assertTrue(self.reg._lock.locked())
self.reg.release()
self.assertFalse(self.reg._lock.locked())
async def test_contextmanager(self):
self.assertFalse(self.reg._lock.locked())
async with self.reg as nothing:
self.assertIsNone(nothing)
self.assertTrue(self.reg._lock.locked())
self.assertFalse(self.reg._lock.locked())
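In other words, the register behaves like a set of task IDs guarded by an `asyncio` lock; a tiny usage sketch based only on the behaviour asserted above:
```python
import asyncio
from asyncio_taskpool.internals.group_register import TaskGroupRegister

async def demo() -> None:
    reg = TaskGroupRegister(1, 2)
    async with reg:       # acquires the internal lock; it is released again on exit
        reg.add(3)
        reg.discard(1)
    print(len(reg), 2 in reg, 1 in reg)  # 2 True False

asyncio.run(demo())
```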


@ -0,0 +1,167 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.helpers` module.
"""
import importlib
from unittest import IsolatedAsyncioTestCase, TestCase
from unittest.mock import MagicMock, AsyncMock, NonCallableMagicMock, call, patch
from asyncio_taskpool.internals import constants
from asyncio_taskpool.internals import helpers
class HelpersTestCase(IsolatedAsyncioTestCase):
async def test_execute_optional(self):
f, args, kwargs = NonCallableMagicMock(), [1, 2], None
a = [f, args, kwargs] # to avoid IDE nagging
self.assertIsNone(await helpers.execute_optional(*a))
expected_output = 'foo'
f = MagicMock(return_value=expected_output)
output = await helpers.execute_optional(f, args, kwargs)
self.assertEqual(expected_output, output)
f.assert_called_once_with(*args)
f.reset_mock()
kwargs = {'a': 100, 'b': 200}
output = await helpers.execute_optional(f, args, kwargs)
self.assertEqual(expected_output, output)
f.assert_called_once_with(*args, **kwargs)
f = AsyncMock(return_value=expected_output)
output = await helpers.execute_optional(f, args, kwargs)
self.assertEqual(expected_output, output)
f.assert_awaited_once_with(*args, **kwargs)
def test_star_function(self):
expected_output = 'bar'
f = MagicMock(return_value=expected_output)
a = (1, 2, 3)
stars = 0
output = helpers.star_function(f, a, stars)
self.assertEqual(expected_output, output)
f.assert_called_once_with(a)
f.reset_mock()
stars = 1
output = helpers.star_function(f, a, stars)
self.assertEqual(expected_output, output)
f.assert_called_once_with(*a)
f.reset_mock()
a = {'a': 1, 'b': 2}
stars = 2
output = helpers.star_function(f, a, stars)
self.assertEqual(expected_output, output)
f.assert_called_once_with(**a)
with self.assertRaises(ValueError):
helpers.star_function(f, a, 3)
with self.assertRaises(ValueError):
helpers.star_function(f, a, -1)
with self.assertRaises(ValueError):
helpers.star_function(f, a, 123456789)
def test_get_first_doc_line(self):
expected_output = 'foo bar baz'
mock_obj = MagicMock(__doc__=f"""{expected_output}
something else
even more
""")
output = helpers.get_first_doc_line(mock_obj)
self.assertEqual(expected_output, output)
async def test_return_or_exception(self):
expected_output = '420'
mock_func = AsyncMock(return_value=expected_output)
args = (1, 3, 5)
kwargs = {'a': 1, 'b': 2, 'c': 'foo'}
output = await helpers.return_or_exception(mock_func, *args, **kwargs)
self.assertEqual(expected_output, output)
mock_func.assert_awaited_once_with(*args, **kwargs)
mock_func = MagicMock(return_value=expected_output)
output = await helpers.return_or_exception(mock_func, *args, **kwargs)
self.assertEqual(expected_output, output)
mock_func.assert_called_once_with(*args, **kwargs)
class TestException(Exception):
pass
test_exception = TestException()
mock_func = MagicMock(side_effect=test_exception)
output = await helpers.return_or_exception(mock_func, *args, **kwargs)
self.assertEqual(test_exception, output)
mock_func.assert_called_once_with(*args, **kwargs)
def test_resolve_dotted_path(self):
from logging import WARNING
from urllib.request import urlopen
self.assertEqual(WARNING, helpers.resolve_dotted_path('logging.WARNING'))
self.assertEqual(urlopen, helpers.resolve_dotted_path('urllib.request.urlopen'))
with patch.object(helpers, 'import_module', return_value=object) as mock_import_module:
with self.assertRaises(AttributeError):
helpers.resolve_dotted_path('foo.bar.baz')
mock_import_module.assert_has_calls([call('foo'), call('foo.bar')])
class ClassMethodWorkaroundTestCase(TestCase):
def test_init(self):
def func(): return 'foo'
def getter(): return 'bar'
prop = property(getter)
instance = helpers.ClassMethodWorkaround(func)
self.assertIs(func, instance._getter)
instance = helpers.ClassMethodWorkaround(prop)
self.assertIs(getter, instance._getter)
@patch.object(helpers.ClassMethodWorkaround, '__init__', return_value=None)
def test_get(self, _mock_init: MagicMock):
def func(x: MagicMock): return x.__name__
instance = helpers.ClassMethodWorkaround(MagicMock())
instance._getter = func
obj, cls = None, MagicMock
expected_output = 'MagicMock'
output = instance.__get__(obj, cls)
self.assertEqual(expected_output, output)
obj = MagicMock(__name__='bar')
expected_output = 'bar'
output = instance.__get__(obj, cls)
self.assertEqual(expected_output, output)
cls = None
output = instance.__get__(obj, cls)
self.assertEqual(expected_output, output)
def test_correct_class(self):
is_older_python = constants.PYTHON_BEFORE_39
try:
constants.PYTHON_BEFORE_39 = True
importlib.reload(helpers)
self.assertIs(helpers.ClassMethodWorkaround, helpers.classmethod)
constants.PYTHON_BEFORE_39 = False
importlib.reload(helpers)
self.assertIs(classmethod, helpers.classmethod)
finally:
constants.PYTHON_BEFORE_39 = is_older_python
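As a quick reference, the `star_function` cases exercised above amount to the following dispatch (an illustrative re-implementation, not the library code):
```python
def star_function_sketch(function, arg, arg_stars: int = 0):
    if arg_stars == 0:
        return function(arg)    # pass the argument object as-is
    if arg_stars == 1:
        return function(*arg)   # unpack an iterable of positional arguments
    if arg_stars == 2:
        return function(**arg)  # unpack a mapping of keyword arguments
    raise ValueError("Invalid number of stars")

print(star_function_sketch(len, "abc"))                   # 3
print(star_function_sketch(divmod, (7, 3), arg_stars=1))  # (2, 1)
print(star_function_sketch(dict, {'a': 1}, arg_stars=2))  # {'a': 1}
```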

File diff suppressed because it is too large


@ -0,0 +1,43 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Unittests for the `asyncio_taskpool.queue_context` module.
"""
from unittest import IsolatedAsyncioTestCase
from unittest.mock import MagicMock, patch
from asyncio_taskpool.queue_context import Queue
class QueueTestCase(IsolatedAsyncioTestCase):
def test_item_processed(self):
queue = Queue()
queue._unfinished_tasks = 1000
queue.item_processed()
self.assertEqual(999, queue._unfinished_tasks)
@patch.object(Queue, 'item_processed')
async def test_contextmanager(self, mock_item_processed: MagicMock):
queue = Queue()
item = 'foo'
queue.put_nowait(item)
async with queue as item_from_queue:
self.assertEqual(item, item_from_queue)
mock_item_processed.assert_not_called()
mock_item_processed.assert_called_once_with()
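The two tests above describe the queue's context-manager contract: entering yields the next item, leaving marks it as processed. A minimal usage sketch under that assumption:
```python
import asyncio
from asyncio_taskpool.queue_context import Queue

async def consume_one() -> None:
    queue = Queue()
    queue.put_nowait('foo')
    # __aenter__ fetches the next item; __aexit__ marks it as processed.
    async with queue as item:
        print("processing", item)

asyncio.run(consume_one())
```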


@ -1,18 +1,24 @@
# Using `asyncio-taskpool`
## Contents
- [Contents](#contents)
- [Minimal example for `SimpleTaskPool`](#minimal-example-for-simpletaskpool)
- [Advanced example for `TaskPool`](#advanced-example-for-taskpool)
- [Control server example](#control-server-example)
## Minimal example for `SimpleTaskPool`
The minimum required setup is a "worker" coroutine function that can do something asynchronously, and a main coroutine function that sets up the `SimpleTaskPool`, starts/stops the tasks as desired, and eventually awaits them all.
The following demo script enables full log output first for additional clarity. It is complete and should work as is.
```python
import logging
import asyncio
from asyncio_taskpool import SimpleTaskPool
logging.getLogger().setLevel(logging.NOTSET)
logging.getLogger('asyncio_taskpool').addHandler(logging.StreamHandler())
@ -23,60 +29,238 @@ async def work(n: int) -> None:
Pseudo-worker function.
Counts up to an integer with a second of sleep before each iteration.
In a real-world use case, a worker function should probably have access
to some synchronisation primitive (such as a queue) or shared resource
to distribute work between an arbitrary number of workers.
"""
for i in range(n):
await asyncio.sleep(1)
print("> did", i)
async def main() -> None:
pool = SimpleTaskPool(work, args=(5,)) # initializes the pool; no work is being done yet
pool.start(3) # launches work tasks 0, 1, and 2
await asyncio.sleep(1.5) # lets the tasks work for a bit
pool.start(1) # launches work task 3
await asyncio.sleep(1.5) # lets the tasks work for a bit
pool.stop(2) # cancels tasks 3 and 2 (LIFO order)
await pool.gather_and_close() # awaits all tasks, then flushes the pool
if __name__ == '__main__':
asyncio.run(main())
```
<details>
<summary>Output: (Click to expand)</summary>
```
SimpleTaskPool-0 initialized
Started SimpleTaskPool-0_Task-0
Started SimpleTaskPool-0_Task-1
Started SimpleTaskPool-0_Task-2
> did 0
> did 0
> did 0
Started SimpleTaskPool-0_Task-3
> did 1
> did 1
> did 1
> did 0
> did 2
> did 2
SimpleTaskPool-0 is locked!
Cancelling SimpleTaskPool-0_Task-2 ...
Cancelled SimpleTaskPool-0_Task-2
Ended SimpleTaskPool-0_Task-2
Cancelling SimpleTaskPool-0_Task-3 ...
Cancelled SimpleTaskPool-0_Task-3
Ended SimpleTaskPool-0_Task-3
> did 3
> did 3
Ended SimpleTaskPool-0_Task-0
Ended SimpleTaskPool-0_Task-1
> did 4
> did 4
```
</details>
## Advanced example for `TaskPool`
This time, we want to start tasks from _different_ coroutine functions **and** with _different_ arguments. For this we need an instance of the more generalized `TaskPool` class.
As with the simple example, we need "worker" coroutine functions that can do something asynchronously, as well as a main coroutine function that sets up the pool, starts the tasks, and eventually awaits them.
The following demo script enables full log output first for additional clarity. It is complete and should work as is.
```python
import logging
import asyncio
from asyncio_taskpool import TaskPool
logging.getLogger().setLevel(logging.NOTSET)
logging.getLogger('asyncio_taskpool').addHandler(logging.StreamHandler())
async def work(start: int, stop: int, step: int = 1) -> None:
"""Pseudo-worker function counting through a range with a second of sleep in between each iteration."""
for i in range(start, stop, step):
await asyncio.sleep(1)
print("> work with", i)
async def other_work(a: int, b: int) -> None:
"""Different pseudo-worker counting through a range with half a second of sleep in between each iteration."""
for i in range(a, b):
await asyncio.sleep(0.5)
print("> other_work with", i)
async def main() -> None:
# Initialize a new task pool instance and limit its size to 3 tasks.
pool = TaskPool(3)
# Queue up two tasks (IDs 0 and 1) to run concurrently (with the same keyword-arguments).
print("> Called `apply`")
pool.apply(work, kwargs={'start': 100, 'stop': 200, 'step': 10}, num=2)
# Let the tasks work for a bit.
await asyncio.sleep(1.5)
# Now, let us enqueue four more tasks (which will receive IDs 2, 3, 4, and 5), each created with different
# positional arguments by using `starmap`, but we want no more than two of those to run concurrently.
# Since we set our pool size to 3, and already have two tasks working within the pool,
# only the first one of these will start immediately (and receive ID 2).
# The second one will start (with ID 3), only once there is room in the pool,
# which -- in this example -- will be the case after ID 2 ends.
# Once there is room in the pool again, the third and fourth will each start (with IDs 4 and 5)
# only once there is room in the pool and no more than one other task of these new ones is running.
args_list = [(0, 10), (10, 20), (20, 30), (30, 40)]
pool.starmap(other_work, args_list, num_concurrent=2)
print("> Called `starmap`")
# We block, until all tasks have ended.
print("> Calling `gather_and_close`...")
await pool.gather_and_close()
print("> Done.")
if __name__ == '__main__':
asyncio.run(main())
```
<details>
<summary>Output: (Click to expand)</summary>
```
TaskPool-0 initialized
Started TaskPool-0_Task-0
Started TaskPool-0_Task-1
> Called `apply`
> work with 100
> work with 100
> Called `starmap` <--- notice that this immediately returns, even before Task-2 is started
> Calling `gather_and_close`... <--- this blocks `main()` until all tasks have ended
TaskPool-0 is locked!
Started TaskPool-0_Task-2 <--- at this point the pool is full
> work with 110
> work with 110
> other_work with 0
> other_work with 1
> work with 120
> work with 120
> other_work with 2
> other_work with 3
> work with 130
> work with 130
> other_work with 4
> other_work with 5
> work with 140
> work with 140
> other_work with 6
> other_work with 7
> work with 150
> work with 150
> other_work with 8
Ended TaskPool-0_Task-2 <--- this frees up room for one more task from `starmap`
Started TaskPool-0_Task-3
> other_work with 9
> work with 160
> work with 160
> other_work with 10
> other_work with 11
> work with 170
> work with 170
> other_work with 12
> other_work with 13
> work with 180
> work with 180
> other_work with 14
> other_work with 15
Ended TaskPool-0_Task-0
Ended TaskPool-0_Task-1 <--- these two end and free up two more slots in the pool
Started TaskPool-0_Task-4 <--- since `num_concurrent` is set to 2, Task-5 will not start
> work with 190
> work with 190
> other_work with 16
> other_work with 17
> other_work with 20
> other_work with 18
> other_work with 21
Ended TaskPool-0_Task-3 <--- now that only Task-4 of the group remains, Task-5 starts
Started TaskPool-0_Task-5
> other_work with 19
> other_work with 22
> other_work with 23
> other_work with 30
> other_work with 24
> other_work with 31
> other_work with 25
> other_work with 32
> other_work with 26
> other_work with 33
> other_work with 27
> other_work with 34
> other_work with 28
> other_work with 35
> other_work with 29
> other_work with 36
Ended TaskPool-0_Task-4
> other_work with 37
> other_work with 38
> other_work with 39
Ended TaskPool-0_Task-5
> Done.
```
(Added comments with `<---` next to the output lines.)
Keep in mind that the logger and `print` asynchronously write to `stdout`, so the order of lines in your output may be slightly different.
</details>
## Control server example
One of the main features of `asyncio-taskpool` is the ability to control a task pool "from the outside" at runtime.
The [example_server.py](./example_server.py) script launches a couple of worker tasks within a `SimpleTaskPool` instance and then starts a `TCPControlServer` instance for that task pool. The server is configured to locally bind to port `9999` and is stopped automatically after the "work" is done.
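In essence, the script wires the pool and the control server together along the following lines (a condensed sketch of the pattern, not the full script; the worker coroutine here is a stand-in):
```python
import asyncio
from asyncio_taskpool import SimpleTaskPool
from asyncio_taskpool.control import TCPControlServer

async def worker(q: asyncio.Queue) -> None:
    while True:  # stand-in worker: runs until its task is cancelled
        item = await q.get()
        await asyncio.sleep(1)
        q.task_done()

async def main() -> None:
    q = asyncio.Queue()
    for item in range(100):
        q.put_nowait(item)
    pool = SimpleTaskPool(worker, args=(q,))
    pool.start(3)
    control_server_task = await TCPControlServer(pool, host='127.0.0.1', port=9999).serve_forever()
    await q.join()                # wait until the queue has been worked off
    control_server_task.cancel()  # then shut the control server down again
    pool.stop_all()
    await pool.gather_and_close(return_exceptions=True)
    await control_server_task

if __name__ == '__main__':
    asyncio.run(main())
```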
To run the script:
```shell
python usage/example_server.py
```
You can then connect to the server via the command line interface:
```shell
python -m asyncio_taskpool.control tcp localhost 9999
```
The CLI starts a `TCPControlClient` that connects to our example server. Once the connection is established, it gives you an input prompt allowing you to issue commands to the task pool:
```
Connected to SimpleTaskPool-0
Type '-h' to get help and usage instructions for all available commands.
>
```
It may be useful to run the server script and the client interface in two separate terminal windows side by side. The server script is configured with a verbose logger and will react to any commands issued by the client with detailed log messages in the terminal.
---
© 2022 Daniil Fajnberg


@ -1,8 +1,31 @@
__author__ = "Daniil Fajnberg"
__copyright__ = "Copyright © 2022 Daniil Fajnberg"
__license__ = """GNU LGPLv3.0
This file is part of asyncio-taskpool.
asyncio-taskpool is free software: you can redistribute it and/or modify it under the terms of
version 3.0 of the GNU Lesser General Public License as published by the Free Software Foundation.
asyncio-taskpool is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with asyncio-taskpool.
If not, see <https://www.gnu.org/licenses/>."""
__doc__ = """
Working example of a TCPControlServer in combination with the SimpleTaskPool.
Use the main CLI client to interface at the socket.
"""
import asyncio
import logging
from asyncio_taskpool import SimpleTaskPool
from asyncio_taskpool.control import TCPControlServer
from asyncio_taskpool.internals.constants import PACKAGE_NAME
logging.getLogger().setLevel(logging.NOTSET)
@ -12,11 +35,11 @@ logging.getLogger(PACKAGE_NAME).addHandler(logging.StreamHandler())
async def work(item: int) -> None:
"""The non-blocking sleep simulates something like an I/O operation that can be done asynchronously."""
await asyncio.sleep(1)
print("worked on", item, flush=True)
async def worker(q: asyncio.Queue) -> None:
"""Simulates doing asynchronous work that takes a bit of time to finish."""
# We only want the worker to stop, when its task is cancelled; therefore we start an infinite loop.
while True:
# We want to block here, until we can get the next item from the queue.
@ -43,21 +66,20 @@ async def main() -> None:
# We just put some integers into our queue, since all our workers actually do, is print an item and sleep for a bit.
for item in range(100):
q.put_nowait(item)
pool = SimpleTaskPool(worker, args=(q,)) # initializes the pool
pool.start(3) # launches three worker tasks
control_server_task = await TCPControlServer(pool, host='127.0.0.1', port=9999).serve_forever()
# We block until `.task_done()` has been called once by our workers for every item placed into the queue.
await q.join()
# Since we don't need any "work" done anymore, we can get rid of our control server by cancelling the task.
control_server_task.cancel()
# Since our workers should now be stuck waiting for more items to pick from the queue, but no items are left,
# we can now safely cancel their tasks.
pool.stop_all()
# Finally, we allow for all tasks to do their cleanup (as if they need to do any) upon being cancelled.
# We block until they all return or raise an exception, but since we are not interested in any of their exceptions,
# we just silently collect their exceptions along with their return values.
await pool.gather_and_close(return_exceptions=True)
await control_server_task