tqdm | A Fast, Extensible Progress Bar for Python and CLI | Command Line Interface library
kandi X-RAY | tqdm Summary
- Process instances
- Get the lock
- Get instances of tqdm_cls
- Refresh the widget
- A context manager that allows you to change the bars
- Display the progress bar
- Close progress bar
- Format a time
- Format the progress meter
- Run the progress bar
- Write text to chat
- Write text to file
- A parallel map function
- Return a function that queries the screen shape
- Progress bar
- Convert docstring to rst
- Display the tqdm progress bar
- Set the description
- Writes a pipe to fout
- Edit the message text
- Returns a function that prints the status of the given file
- Reset the parameters
- Apply a function to a map
- A context manager that temporarily clears the bar
- Cast val to given typ
- Create a tqdm progress bar
- Initialize tqdm progress bar
tqdm Key Features
tqdm Examples and Code Snippets
from tqdm import tqdm
for i in tqdm(range(10000)):
    ...
$ seq 9999999 | tqdm --bytes | wc -l
75.2MB [00:00, 217MB/s]
9999999
$ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
  > backup.tgz
 32%|██████████▍ | 8.89G/27.9G [00:42<01:31, 223MB/s]
.. contents:: Table of contents
   :backlinks: top
   :local:


Installation
------------

Latest PyPI stable release
~~~~~~~~~~~~~~~~~~~~~~~~~~

|Versions| |PyPI-Downloads| |Libraries-Dependents|

.. code:: sh

    pip install tqdm

Latest development release on GitHub
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

|GitHub-Status| |GitHub-Stars| |GitHub-Commits| |GitHub-Forks| |GitHub-Updated|

Pull and install pre-release ``devel`` branch:

.. code:: sh

    pip install "git+https://github.com/tqdm/tqdm.git@devel#egg=tqdm"

Latest Conda release
~~~~~~~~~~~~~~~~~~~~

|Conda-Forge-Status|

.. code:: sh

    conda install -c conda-forge tqdm

Latest Snapcraft release
~~~~~~~~~~~~~~~~~~~~~~~~

|Snapcraft|

There are 3 channels to choose from:

.. code:: sh

    snap install tqdm               # implies --stable, i.e. latest tagged release
    snap install tqdm  --candidate  # master branch
    snap install tqdm  --edge       # devel branch

Note that ``snap`` binaries are purely for CLI use (not ``import``-able), and
automatically set up ``bash`` tab-completion.

Latest Docker release
~~~~~~~~~~~~~~~~~~~~~

|Docker|

.. code:: sh

    docker pull tqdm/tqdm
    docker run -i --rm tqdm/tqdm --help

Other
~~~~~

There are other (unofficial) places where ``tqdm`` may be downloaded,
particularly for CLI use: |Repology|

.. |Repology| image:: https://repology.org/badge/tiny-repos/python:tqdm.svg
   :target: https://repology.org/project/python:tqdm/versions

Changelog
---------

The list of all changes is available either on GitHub's Releases:
|GitHub-Status|, on the `wiki `__, or on the `website `__.

Usage
-----

``tqdm`` is very versatile and can be used in a number of ways.
The three main ones are given below.

Iterable-based
~~~~~~~~~~~~~~

Wrap ``tqdm()`` around any iterable:

.. code:: python

    from tqdm import tqdm
    from time import sleep

    text = ""
    for char in tqdm(["a", "b", "c", "d"]):
        sleep(0.25)
        text = text + char

``trange(i)`` is a special optimised instance of ``tqdm(range(i))``:

.. code:: python

    from tqdm import trange

    for i in trange(100):
        sleep(0.01)

Instantiation outside of the loop allows for manual control over ``tqdm()``:

.. code:: python

    pbar = tqdm(["a", "b", "c", "d"])
    for char in pbar:
        sleep(0.25)
        pbar.set_description("Processing %s" % char)

Manual
~~~~~~

Manual control of ``tqdm()`` updates using a ``with`` statement:

.. code:: python

    with tqdm(total=100) as pbar:
        for i in range(10):
            sleep(0.1)
            pbar.update(10)

If the optional variable ``total`` (or an iterable with ``len()``) is
provided, predictive stats are displayed.

``with`` is also optional (you can just assign ``tqdm()`` to a variable,
but in this case don't forget to ``del`` or ``close()`` at the end):

.. code:: python

    pbar = tqdm(total=100)
    for i in range(10):
        sleep(0.1)
        pbar.update(10)
    pbar.close()
Module
~~~~~~

Perhaps the most wonderful use of ``tqdm`` is in a script or on the command
line. Simply inserting ``tqdm`` (or ``python -m tqdm``) between pipes will pass
through all ``stdin`` to ``stdout`` while printing progress to ``stderr``.

The example below demonstrates counting the number of lines in all Python files
in the current directory, with timing information included.

.. code:: sh

    $ time find . -name '*.py' -type f -exec cat \{} \; | wc -l
    857365

    real    0m3.458s
    user    0m0.274s
    sys     0m3.325s

    $ time find . -name '*.py' -type f -exec cat \{} \; | tqdm | wc -l
    857366it [00:03, 246471.31it/s]
    857365

    real    0m3.585s
    user    0m0.862s
    sys     0m3.358s

Note that the usual arguments for ``tqdm`` can also be specified.

.. code:: sh

    $ find . -name '*.py' -type f -exec cat \{} \; |
        tqdm --unit loc --unit_scale --total 857366 >> /dev/null
    100%|█████████████████████████████████| 857K/857K [00:04<00:00, 246Kloc/s]

Backing up a large directory?

.. code:: sh

    $ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
      > backup.tgz
     32%|██████████▍ | 8.89G/27.9G [00:42<01:31, 223MB/s]
     44%|██████████████▊ | 153M/352M [00:14<00:18, 11.0MB/s]

This can be beautified further:

.. code:: sh

    $ BYTES="$(du -sb docs/ | cut -f1)"
    $ tar -cf - docs/ \
      | tqdm --bytes --total "$BYTES" --desc Processing | gzip \
      | tqdm --bytes --total "$BYTES" --desc Compressed --position 1 \
      > ~/backup.tgz
    Processing: 100%|██████████████████████| 352M/352M [00:14<00:00, 30.2MB/s]
    Compressed:  42%|█████████▎            | 148M/352M [00:14<00:19, 10.9MB/s]

Or done on a file level using 7-zip:

.. code:: sh

    $ 7z a -bd -r backup.7z docs/ | grep Compressing \
      | tqdm --total $(find docs/ -type f | wc -l) --unit files \
      | grep -v Compressing
    100%|██████████████████████████▉| 15327/15327 [01:00<00:00, 712.96files/s]

Pre-existing CLI programs already outputting basic progress information will
benefit from ``tqdm``'s ``--update`` and ``--update_to`` flags:

.. code:: sh

    $ seq 3 0.1 5 | tqdm --total 5 --update_to --null
    100%|████████████████████████████████████| 5.0/5 [00:00<00:00, 9673.21it/s]
    $ seq 10 | tqdm --update --null  # 1 + 2 + ... + 10 = 55 iterations
    55it [00:00, 90006.52it/s]

FAQ and Known Issues
--------------------

|GitHub-Issues|

The most common issues relate to excessive output on multiple lines, instead
of a neat one-line progress bar.

- Consoles in general: require support for carriage return (``CR``, ``\r``).
- Nested progress bars:

  * Consoles in general: require support for moving cursors up to the
    previous line. For example, `IDLE `__, `ConEmu `__ and `PyCharm `__ (also
    `here `__, `here `__, and `here `__) lack full support.
  * Windows: additionally may require the Python module ``colorama``
    to ensure nested bars stay within their respective lines.

- Unicode:

  * Environments which report that they support unicode will have solid smooth
    progressbars. The fallback is an ``ascii``-only bar.
  * Windows consoles often only partially support unicode and thus
    `often require explicit ascii=True `__ (also `here `__). This is due to
    either normal-width unicode characters being incorrectly displayed as
    "wide", or some unicode characters not rendering.

- Wrapping generators (see the sketch after this list):

  * Generator wrapper functions tend to hide the length of iterables.
    ``tqdm`` does not.
  * Replace ``tqdm(enumerate(...))`` with ``enumerate(tqdm(...))`` or
    ``tqdm(enumerate(x), total=len(x), ...)``.
    The same applies to ``numpy.ndenumerate``.
  * Replace ``tqdm(zip(a, b))`` with ``zip(tqdm(a), b)`` or even
    ``zip(tqdm(a), tqdm(b))``.
  * The same applies to ``itertools``.
  * Some useful convenience functions can be found under ``tqdm.contrib``.

- `Hanging pipes in python2 `__: when using ``tqdm`` on the CLI, you may need
  to use Python 3.5+ for correct buffering.
- `No intermediate output in docker-compose `__: use ``docker-compose run``
  instead of ``docker-compose up`` and ``tty: true``.

If you come across any other difficulties, browse and file |GitHub-Issues|.
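As a quick illustration of the generator advice above (a minimal sketch;
``x``, ``a`` and ``b`` are stand-in lists, not part of the original README):

.. code:: python

    from tqdm import tqdm

    x = list(range(1000))
    # wrap the underlying iterable, not the `enumerate` object,
    # so tqdm can see len(x) and display a full bar with ETA
    for i, item in enumerate(tqdm(x)):
        pass

    a, b = list(range(100)), list(range(100))
    # wrap (at least) the iterable whose length is known
    for ai, bi in zip(tqdm(a), b):
        pass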
""" def __init__(self, iterable=None, desc=None, total=None, leave=True, file=None, ncols=None, mininterval=0.1, maxinterval=10.0, miniters=None, ascii=None, disable=False, unit='it', unit_scale=False, dynamic_ncols=False, smoothing=0.3, bar_format=None, initial=0, position=None, postfix=None, unit_divisor=1000): Parameters ~~~~~~~~~~ * iterable : iterable, optional Iterable to decorate with a progressbar. Leave blank to manually manage the updates. * desc : str, optional Prefix for the progressbar. * total : int or float, optional The number of expected iterations. If unspecified, len(iterable) is used if possible. If float("inf") or as a last resort, only basic progress statistics are displayed (no ETA, no progressbar). If ``gui`` is True and this parameter needs subsequent updating, specify an initial arbitrary large positive number, e.g. 9e9. * leave : bool, optional If [default: True], keeps all traces of the progressbar upon termination of iteration. If ``None``, will leave only if ``position`` is ``0``. * file : ``io.TextIOWrapper`` or ``io.StringIO``, optional Specifies where to output the progress messages (default: sys.stderr). Uses ``file.write(str)`` and ``file.flush()`` methods. For encoding, see ``write_bytes``. * ncols : int, optional The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound. If unspecified, attempts to use environment width. The fallback is a meter width of 10 and no limit for the counter and statistics. If 0, will not print any meter (only stats). * mininterval : float, optional Minimum progress display update interval [default: 0.1] seconds. * maxinterval : float, optional Maximum progress display update interval [default: 10] seconds. Automatically adjusts ``miniters`` to correspond to ``mininterval`` after long display update lag. Only works if ``dynamic_miniters`` or monitor thread is enabled. * miniters : int or float, optional Minimum progress display update interval, in iterations. If 0 and ``dynamic_miniters``, will automatically adjust to equal ``mininterval`` (more CPU efficient, good for tight loops). If > 0, will skip display of specified number of iterations. Tweak this and ``mininterval`` to get very efficient loops. If your progress is erratic with both fast and slow iterations (network, skipping items, etc) you should set miniters=1. * ascii : bool or str, optional If unspecified or False, use unicode (smooth blocks) to fill the meter. The fallback is to use ASCII characters " 123456789#". * disable : bool, optional Whether to disable the entire progressbar wrapper [default: False]. If set to None, disable on non-TTY. * unit : str, optional String that will be used to define the unit of each iteration [default: it]. * unit_scale : bool or int or float, optional If 1 or True, the number of iterations will be reduced/scaled automatically and a metric prefix following the International System of Units standard will be added (kilo, mega, etc.) [default: False]. If any other non-zero number, will scale ``total`` and ``n``. * dynamic_ncols : bool, optional If set, constantly alters ``ncols`` and ``nrows`` to the environment (allowing for window resizes) [default: False]. * smoothing : float, optional Exponential moving average smoothing factor for speed estimates (ignored in GUI mode). Ranges from 0 (average speed) to 1 (current/instantaneous speed) [default: 0.3]. * bar_format : str, optional Specify a custom bar string formatting. May impact performance. 
* bar_format : str, optional
    Specify a custom bar string formatting. May impact performance.
    [default: '{l_bar}{bar}{r_bar}'], where
    l_bar='{desc}: {percentage:3.0f}%|' and
    r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, '
    '{rate_fmt}{postfix}]'
    Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt,
    percentage, elapsed, elapsed_s, ncols, nrows, desc, unit,
    rate, rate_fmt, rate_noinv, rate_noinv_fmt,
    rate_inv, rate_inv_fmt, postfix, unit_divisor,
    remaining, remaining_s, eta.
    Note that a trailing ": " is automatically removed after {desc}
    if the latter is empty. (A usage sketch follows this parameter list.)
* initial : int or float, optional
    The initial counter value. Useful when restarting a progress bar
    [default: 0]. If using float, consider specifying ``{n:.3f}``
    or similar in ``bar_format``, or specifying ``unit_scale``.
* position : int, optional
    Specify the line offset to print this bar (starting from 0).
    Automatic if unspecified.
    Useful to manage multiple bars at once (eg, from threads).
* postfix : dict or ``*``, optional
    Specify additional stats to display at the end of the bar.
    Calls ``set_postfix(**postfix)`` if possible (dict).
* unit_divisor : float, optional
    [default: 1000], ignored unless ``unit_scale`` is True.
* write_bytes : bool, optional
    If (default: None) and ``file`` is unspecified, bytes will be
    written in Python 2. If ``True`` will also write bytes. In all other
    cases will default to unicode.
* lock_args : tuple, optional
    Passed to ``refresh`` for intermediate output
    (initialisation, iterating, and updating).
* nrows : int, optional
    The screen height. If specified, hides nested bars outside this
    bound. If unspecified, attempts to use environment height.
    The fallback is 20.
* colour : str, optional
    Bar colour (e.g. 'green', '#00ff00').
* delay : float, optional
    Don't display until [default: 0] seconds have elapsed.
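As a small usage sketch of ``bar_format`` (the variables are taken from the
"Possible vars" list above; the exact layout here is illustrative):

.. code:: python

    from time import sleep
    from tqdm import tqdm

    # like the default layout, but without the rate
    fmt = "{desc}: {percentage:3.0f}%|{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}]"
    for _ in tqdm(range(100), desc="demo", bar_format=fmt):
        sleep(0.01)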
Extra CLI Options
~~~~~~~~~~~~~~~~~

* delim : chr, optional
    Delimiting character [default: '\n']. Use '\0' for null.
    N.B.: on Windows systems, Python converts '\n' to '\r\n'.
* buf_size : int, optional
    String buffer size in bytes [default: 256]
    used when ``delim`` is specified.
* bytes : bool, optional
    If true, will count bytes, ignore ``delim``, and default
    ``unit_scale`` to True, ``unit_divisor`` to 1024, and ``unit`` to 'B'.
* tee : bool, optional
    If true, passes ``stdin`` to both ``stderr`` and ``stdout``.
* update : bool, optional
    If true, will treat input as newly elapsed iterations,
    i.e. numbers to pass to ``update()``. Note that this is slow
    (~2e5 it/s) since every input must be decoded as a number.
* update_to : bool, optional
    If true, will treat input as total elapsed iterations,
    i.e. numbers to assign to ``self.n``. Note that this is slow
    (~2e5 it/s) since every input must be decoded as a number.
* null : bool, optional
    If true, will discard input (no stdout).
* manpath : str, optional
    Directory in which to install tqdm man pages.
* comppath : str, optional
    Directory in which to place tqdm completion.
* log : str, optional
    CRITICAL|FATAL|ERROR|WARN(ING)|[default: 'INFO']|DEBUG|NOTSET.

Returns
~~~~~~~

* out : decorated iterator.

.. code:: python

    class tqdm():
      def update(self, n=1):
          """
          Manually update the progress bar, useful for streams
          such as reading files.
          E.g.:
          >>> t = tqdm(total=filesize) # Initialise
          >>> for current_buffer in stream:
          ...    ...
          ...    t.update(len(current_buffer))
          >>> t.close()
          The last line is highly recommended, but possibly not necessary if
          ``t.update()`` will be called in such a way that ``filesize`` will be
          exactly reached and printed.

          Parameters
          ----------
          n  : int or float, optional
              Increment to add to the internal counter of iterations
              [default: 1]. If using float, consider specifying ``{n:.3f}``
              or similar in ``bar_format``, or specifying ``unit_scale``.

          Returns
          -------
          out  : bool or None
              True if a ``display()`` was triggered.
          """

      def close(self):
          """Cleanup and (if leave=False) close the progressbar."""

      def clear(self, nomove=False):
          """Clear current bar display."""

      def refresh(self):
          """
          Force refresh the display of this bar.

          Parameters
          ----------
          nolock  : bool, optional
              If ``True``, does not lock.
              If [default: ``False``]: calls ``acquire()`` on internal lock.
          lock_args  : tuple, optional
              Passed to internal lock's ``acquire()``.
              If specified, will only ``display()`` if ``acquire()``
              returns ``True``.
          """

      def unpause(self):
          """Restart tqdm timer from last print time."""

      def reset(self, total=None):
          """
          Resets to 0 iterations for repeated use.

          Consider combining with ``leave=True``.

          Parameters
          ----------
          total  : int or float, optional. Total to use for the new bar.
          """

      def set_description(self, desc=None, refresh=True):
          """
          Set/modify description of the progress bar.

          Parameters
          ----------
          desc  : str, optional
          refresh  : bool, optional
              Forces refresh [default: True].
          """

      def set_postfix(self, ordered_dict=None, refresh=True, **tqdm_kwargs):
          """
          Set/modify postfix (additional stats)
          with automatic formatting based on datatype.

          Parameters
          ----------
          ordered_dict  : dict or OrderedDict, optional
          refresh  : bool, optional
              Forces refresh [default: True].
          kwargs  : dict, optional
          """

      @classmethod
      def write(cls, s, file=sys.stdout, end="\n"):
          """Print a message via tqdm (without overlap with bars)."""

      @property
      def format_dict(self):
          """Public API for read-only member access."""

      def display(self, msg=None, pos=None):
          """
          Use ``self.sp`` to display ``msg`` in the specified ``pos``.

          Consider overloading this function when inheriting to use e.g.:
          ``self.some_frontend(**self.format_dict)`` instead of ``self.sp``.

          Parameters
          ----------
          msg  : str, optional. What to display (default: ``repr(self)``).
          pos  : int, optional. Position to ``moveto``
            (default: ``abs(self.pos)``).
          """

      @classmethod
      @contextmanager
      def wrapattr(cls, stream, method, total=None, bytes=True, **tqdm_kwargs):
          """
          stream  : file-like object.
          method  : str, "read" or "write". The result of ``read()`` and
              the first argument of ``write()`` should have a ``len()``.

          >>> with tqdm.wrapattr(file_obj, "read", total=file_obj.size) as fobj:
          ...     while True:
          ...         chunk = fobj.read(chunk_size)
          ...         if not chunk:
          ...             break
          """

      @classmethod
      def pandas(cls, *targs, **tqdm_kwargs):
          """Registers the current `tqdm` class with `pandas`."""

    def trange(*args, **tqdm_kwargs):
        """
        A shortcut for `tqdm(xrange(*args), **tqdm_kwargs)`.
        On Python3+, `range` is used instead of `xrange`.
        """

Convenience Functions
~~~~~~~~~~~~~~~~~~~~~

.. code:: python

    def tqdm.contrib.tenumerate(iterable, start=0, total=None,
                                tqdm_class=tqdm.auto.tqdm, **tqdm_kwargs):
        """Equivalent of `numpy.ndenumerate` or builtin `enumerate`."""

    def tqdm.contrib.tzip(iter1, *iter2plus, **tqdm_kwargs):
        """Equivalent of builtin `zip`."""

    def tqdm.contrib.tmap(function, *sequences, **tqdm_kwargs):
        """Equivalent of builtin `map`."""
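A short usage sketch of these convenience functions (the iterables are
arbitrary examples):

.. code:: python

    from tqdm.contrib import tenumerate, tmap, tzip

    for i, x in tenumerate(["a", "b", "c"]):
        ...  # like enumerate(), with a progress bar

    for x, y in tzip([1, 2, 3], [4, 5, 6]):
        ...  # like zip(), with a progress bar

    squares = list(tmap(lambda n: n * n, range(10)))  # like map()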
Submodules
~~~~~~~~~~

.. code:: python

    class tqdm.notebook.tqdm(tqdm.tqdm):
        """IPython/Jupyter Notebook widget."""

    class tqdm.auto.tqdm(tqdm.tqdm):
        """Automatically chooses between `tqdm.notebook` and `tqdm.tqdm`."""

    class tqdm.asyncio.tqdm(tqdm.tqdm):
        """Asynchronous version."""
        @classmethod
        def as_completed(cls, fs, *, loop=None, timeout=None, total=None,
                         **tqdm_kwargs):
            """Wrapper for `asyncio.as_completed`."""

    class tqdm.gui.tqdm(tqdm.tqdm):
        """Matplotlib GUI version."""

    class tqdm.tk.tqdm(tqdm.tqdm):
        """Tkinter GUI version."""

    class tqdm.rich.tqdm(tqdm.tqdm):
        """`rich.progress` version."""

    class tqdm.keras.TqdmCallback(keras.callbacks.Callback):
        """Keras callback for epoch and batch progress."""

    class tqdm.dask.TqdmCallback(dask.callbacks.Callback):
        """Dask callback for task progress."""

``contrib``
+++++++++++

The ``tqdm.contrib`` package also contains experimental modules:

- ``tqdm.contrib.itertools``: Thin wrappers around ``itertools``
- ``tqdm.contrib.concurrent``: Thin wrappers around ``concurrent.futures``
- ``tqdm.contrib.slack``: Posts to `Slack `__ bots
- ``tqdm.contrib.discord``: Posts to `Discord `__ bots
- ``tqdm.contrib.telegram``: Posts to `Telegram `__ bots
- ``tqdm.contrib.bells``: Automagically enables all optional features

  * ``auto``, ``pandas``, ``slack``, ``discord``, ``telegram``

Examples and Advanced Usage
---------------------------

- See the `examples `__ folder;
- import the module and run ``help()``;
- consult the `wiki `__;

  * this has an `excellent article `__
    on how to make a **great** progressbar;

- check out the `slides from PyData London `__, or
- run the |binder-demo|.

Description and additional stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Custom information can be displayed and updated dynamically on ``tqdm`` bars
with the ``desc`` and ``postfix`` arguments:

.. code:: python

    from tqdm import tqdm, trange
    from random import random, randint
    from time import sleep

    with trange(10) as t:
        for i in t:
            # Description will be displayed on the left
            t.set_description('GEN %i' % i)
            # Postfix will be displayed on the right,
            # formatted automatically based on argument's datatype
            t.set_postfix(loss=random(), gen=randint(1,999), str='h',
                          lst=[1, 2])
            sleep(0.1)

    with tqdm(total=10, bar_format="{postfix[0]} {postfix[1][value]:>8.2g}",
              postfix=["Batch", dict(value=0)]) as t:
        for i in range(10):
            sleep(0.1)
            t.postfix[1]["value"] = i / 2
            t.update()

Points to remember when using ``{postfix[...]}`` in the ``bar_format`` string:

- ``postfix`` also needs to be passed as an initial argument in a compatible
  format, and
- ``postfix`` will be auto-converted to a string if it is a ``dict``-like
  object. To prevent this behaviour, insert an extra item into the dictionary
  where the key is not a string.

Additional ``bar_format`` parameters may also be defined by overriding
``format_dict``, and the bar itself may be modified using ``ascii``:

.. code:: python

    from tqdm import tqdm

    class TqdmExtraFormat(tqdm):
        """Provides a `total_time` format parameter"""
        @property
        def format_dict(self):
            d = super(TqdmExtraFormat, self).format_dict
            total_time = d["elapsed"] * (d["total"] or 0) / max(d["n"], 1)
            d.update(total_time=self.format_interval(total_time) + " in total")
            return d

    for i in TqdmExtraFormat(
          range(9), ascii=" .oO0",
          bar_format="{total_time}: {percentage:.0f}%|{bar}{r_bar}"):
        if i == 4:
            break

.. code::

    00:00 in total: 44%|0000.     | 4/9 [00:00<00:00, 962.93it/s]
Note that ``{bar}`` also supports a format specifier ``[width][type]``.

- ``width``

  * unspecified (default): automatic to fill ``ncols``
  * ``int >= 0``: fixed width overriding ``ncols`` logic
  * ``int < 0``: subtract from the automatic default

- ``type``

  * ``a``: ascii (``ascii=True`` override)
  * ``u``: unicode (``ascii=False`` override)
  * ``b``: blank (``ascii=" "`` override)

This means a fixed bar with right-justified text may be created by using:
``bar_format="{l_bar}{bar:10}|{bar:-10b}right-justified"``

Nested progress bars
~~~~~~~~~~~~~~~~~~~~

``tqdm`` supports nested progress bars. Here's an example:

.. code:: python

    from tqdm.auto import trange
    from time import sleep

    for i in trange(4, desc='1st loop'):
        for j in trange(5, desc='2nd loop'):
            for k in trange(50, desc='3rd loop', leave=False):
                sleep(0.01)

For manual control over positioning (e.g. for multi-processing use),
you may specify ``position=n`` where ``n=0`` for the outermost bar,
``n=1`` for the next, and so on.
However, it's best to check if ``tqdm`` can work without manual ``position``
first.

.. code:: python

    from time import sleep
    from tqdm import trange, tqdm
    from multiprocessing import Pool, RLock, freeze_support

    L = list(range(9))

    def progresser(n):
        interval = 0.001 / (n + 2)
        total = 5000
        text = "#{}, est. {:<04.2}s".format(n, interval * total)
        for _ in trange(total, desc=text, position=n):
            sleep(interval)

    if __name__ == '__main__':
        freeze_support()  # for Windows support
        tqdm.set_lock(RLock())  # for managing output contention
        p = Pool(initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),))
        p.map(progresser, L)

Note that in Python 3, ``tqdm.write`` is thread-safe:

.. code:: python

    from time import sleep
    from tqdm import tqdm, trange
    from concurrent.futures import ThreadPoolExecutor

    L = list(range(9))

    def progresser(n):
        interval = 0.001 / (n + 2)
        total = 5000
        text = "#{}, est. {:<04.2}s".format(n, interval * total)
        for _ in trange(total, desc=text):
            sleep(interval)
        if n == 6:
            tqdm.write("n == 6 completed.")
            tqdm.write("`tqdm.write()` is thread-safe in py3!")

    if __name__ == '__main__':
        with ThreadPoolExecutor() as p:
            p.map(progresser, L)

Hooks and callbacks
~~~~~~~~~~~~~~~~~~~

``tqdm`` can easily support callbacks/hooks and manual updates.
Here's an example with ``urllib``:

**``urllib.urlretrieve`` documentation**

    | [...]
    | If present, the hook function will be called once
    | on establishment of the network connection and once after each block read
    | thereafter. The hook will be passed three arguments; a count of blocks
    | transferred so far, a block size in bytes, and the total size of the file.
    | [...]

.. code:: python

    import urllib, os
    from tqdm import tqdm
    urllib = getattr(urllib, 'request', urllib)

    class TqdmUpTo(tqdm):
        """Provides `update_to(n)` which uses `tqdm.update(delta_n)`."""
        def update_to(self, b=1, bsize=1, tsize=None):
            """
            b  : int, optional
                Number of blocks transferred so far [default: 1].
            bsize  : int, optional
                Size of each block (in tqdm units) [default: 1].
            tsize  : int, optional
                Total size (in tqdm units). If [default: None] remains unchanged.
            """
            if tsize is not None:
                self.total = tsize
            return self.update(b * bsize - self.n)  # also sets self.n = b * bsize

    eg_link = "https://caspersci.uk.to/matryoshka.zip"
    with TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
                  desc=eg_link.split('/')[-1]) as t:  # all optional kwargs
        urllib.urlretrieve(eg_link, filename=os.devnull,
                           reporthook=t.update_to, data=None)
        t.total = t.n

Inspired by `twine#242 `__.
Functional alternative in `examples/tqdm_wget.py `__.
It is recommended to use ``miniters=1`` whenever there are potentially large
differences in iteration speed (e.g. downloading a file over a patchy
connection).

**Wrapping read/write methods**

To measure throughput through a file-like object's ``read`` or ``write``
methods, use ``CallbackIOWrapper``:

.. code:: python

    from tqdm.auto import tqdm
    from tqdm.utils import CallbackIOWrapper

    with tqdm(total=file_obj.size,
              unit='B', unit_scale=True, unit_divisor=1024) as t:
        fobj = CallbackIOWrapper(t.update, file_obj, "read")
        while True:
            chunk = fobj.read(chunk_size)
            if not chunk:
                break
        t.reset()
        # ... continue to use `t` for something else

Alternatively, use the even simpler ``wrapattr`` convenience function,
which would condense both the ``urllib`` and ``CallbackIOWrapper`` examples
down to:

.. code:: python

    import urllib, os
    from tqdm import tqdm

    eg_link = "https://caspersci.uk.to/matryoshka.zip"
    response = getattr(urllib, 'request', urllib).urlopen(eg_link)
    with tqdm.wrapattr(open(os.devnull, "wb"), "write",
                       miniters=1, desc=eg_link.split('/')[-1],
                       total=getattr(response, 'length', None)) as fout:
        for chunk in response:
            fout.write(chunk)

The ``requests`` equivalent is nearly identical:

.. code:: python

    import requests, os
    from tqdm import tqdm

    eg_link = "https://caspersci.uk.to/matryoshka.zip"
    response = requests.get(eg_link, stream=True)
    with tqdm.wrapattr(open(os.devnull, "wb"), "write",
                       miniters=1, desc=eg_link.split('/')[-1],
                       total=int(response.headers.get('content-length', 0))) as fout:
        for chunk in response.iter_content(chunk_size=4096):
            fout.write(chunk)

**Custom callback**

``tqdm`` is known for intelligently skipping unnecessary displays. To make a
custom callback take advantage of this, simply use the return value of
``update()``. This is set to ``True`` if a ``display()`` was triggered.

.. code:: python

    from tqdm.auto import tqdm as std_tqdm

    def external_callback(*args, **kwargs):
        ...

    class TqdmExt(std_tqdm):
        def update(self, n=1):
            displayed = super(TqdmExt, self).update(n)
            if displayed:
                external_callback(**self.format_dict)
            return displayed

``asyncio``
~~~~~~~~~~~

Note that ``break`` isn't currently caught by asynchronous iterators.
This means that ``tqdm`` cannot clean up after itself in this case:

.. code:: python

    from tqdm.asyncio import tqdm

    async for i in tqdm(range(9)):
        if i == 2:
            break

Instead, either call ``pbar.close()`` manually or use the context manager
syntax:

.. code:: python

    from tqdm.asyncio import tqdm

    with tqdm(range(9)) as pbar:
        async for i in pbar:
            if i == 2:
                break
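The ``as_completed`` wrapper listed under Submodules fits here too.
A minimal sketch (assumes Python 3.7+; ``work`` is a stand-in coroutine):

.. code:: python

    import asyncio
    from tqdm.asyncio import tqdm

    async def work(i):
        await asyncio.sleep(0.1)
        return i

    async def main():
        futures = [work(i) for i in range(10)]
        # the bar advances as each future completes
        for f in tqdm.as_completed(futures):
            await f

    asyncio.run(main())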
Pandas Integration
~~~~~~~~~~~~~~~~~~

Due to popular demand we've added support for ``pandas`` -- here's an example
for ``DataFrame.progress_apply`` and ``DataFrameGroupBy.progress_apply``:

.. code:: python

    import pandas as pd
    import numpy as np
    from tqdm import tqdm

    df = pd.DataFrame(np.random.randint(0, 100, (100000, 6)))

    # Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`
    # (can use `tqdm.gui.tqdm`, `tqdm.notebook.tqdm`, optional kwargs, etc.)
    tqdm.pandas(desc="my bar!")

    # Now you can use `progress_apply` instead of `apply`
    # and `progress_map` instead of `map`
    df.progress_apply(lambda x: x**2)
    # can also groupby:
    # df.groupby(0).progress_apply(lambda x: x**2)

In case you're interested in how this works (and how to modify it for your
own callbacks), see the `examples `__ folder or import the module and run
``help()``.

Keras Integration
~~~~~~~~~~~~~~~~~

A ``keras`` callback is also available:

.. code:: python

    from tqdm.keras import TqdmCallback

    ...

    model.fit(..., verbose=0, callbacks=[TqdmCallback()])

Dask Integration
~~~~~~~~~~~~~~~~

A ``dask`` callback is also available:

.. code:: python

    from tqdm.dask import TqdmCallback

    with TqdmCallback(desc="compute"):
        ...
        arr.compute()

    # or use callback globally
    cb = TqdmCallback(desc="global")
    cb.register()
    arr.compute()

IPython/Jupyter Integration
~~~~~~~~~~~~~~~~~~~~~~~~~~~

IPython/Jupyter is supported via the ``tqdm.notebook`` submodule:

.. code:: python

    from tqdm.notebook import trange, tqdm
    from time import sleep

    for i in trange(3, desc='1st loop'):
        for j in tqdm(range(100), desc='2nd loop'):
            sleep(0.01)

In addition to ``tqdm`` features, the submodule provides a native Jupyter
widget (compatible with IPython v1-v4 and Jupyter), fully working nested bars
and colour hints (blue: normal, green: completed, red: error/interrupt,
light blue: no ETA); as demonstrated below.

|Screenshot-Jupyter1|
|Screenshot-Jupyter2|
|Screenshot-Jupyter3|

The ``notebook`` version supports percentage or pixels for overall width
(e.g.: ``ncols='100%'`` or ``ncols='480px'``).

It is also possible to let ``tqdm`` automatically choose between
console or notebook versions by using the ``autonotebook`` submodule:

.. code:: python

    from tqdm.autonotebook import tqdm
    tqdm.pandas()

Note that this will issue a ``TqdmExperimentalWarning`` if run in a notebook
since it is not meant to be possible to distinguish between
``jupyter notebook`` and ``jupyter console``. Use ``auto`` instead of
``autonotebook`` to suppress this warning.

Note that notebooks will display the bar in the cell where it was created.
This may be a different cell from the one where it is used.
If this is not desired, either

- delay the creation of the bar to the cell where it must be displayed, or
- create the bar with ``display=False``, and in a later cell call
  ``display(bar.container)``:

.. code:: python

    from tqdm.notebook import tqdm
    pbar = tqdm(..., display=False)

.. code:: python

    # different cell
    display(pbar.container)

The ``keras`` callback has a ``display()`` method which can be used likewise:

.. code:: python

    from tqdm.keras import TqdmCallback
    cbk = TqdmCallback(display=False)

.. code:: python

    # different cell
    cbk.display()
    model.fit(..., verbose=0, callbacks=[cbk])

Another possibility is to have a single bar (near the top of the notebook)
which is constantly re-used (using ``reset()`` rather than ``close()``).
For this reason, the notebook version (unlike the CLI version) does not
automatically call ``close()`` upon ``Exception``.

.. code:: python

    from tqdm.notebook import tqdm
    pbar = tqdm()

.. code:: python

    # different cell
    iterable = range(100)
    pbar.reset(total=len(iterable))  # initialise with new `total`
    for i in iterable:
        pbar.update()
    pbar.refresh()  # force print final status but don't `close()`

Custom Integration
~~~~~~~~~~~~~~~~~~

To change the default arguments (such as making ``dynamic_ncols=True``),
simply use built-in Python magic:

.. code:: python

    from functools import partial
    from tqdm import tqdm as std_tqdm
    tqdm = partial(std_tqdm, dynamic_ncols=True)

For further customisation,
``tqdm`` may be inherited from to create custom callbacks (as with the
``TqdmUpTo`` example `above <#hooks-and-callbacks>`__) or for custom frontends
(e.g. GUIs such as notebook or plotting packages). In the latter case:

1. ``def __init__()`` to call ``super().__init__(..., gui=True)`` to disable
   terminal ``status_printer`` creation.
2. Redefine: ``close()``, ``clear()``, ``display()``.

Consider overloading ``display()`` to use e.g.
``self.frontend(**self.format_dict)`` instead of ``self.sp(repr(self))``.
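A minimal skeleton of these two steps (a sketch only; ``my_frontend_update``
and ``my_frontend_close`` are hypothetical stand-ins for real frontend calls):

.. code:: python

    from tqdm import tqdm as std_tqdm

    def my_frontend_update(**fmt):
        ...  # e.g. redraw a widget from fmt["n"], fmt["total"], fmt["rate"]

    def my_frontend_close():
        ...  # e.g. destroy the widget

    class TqdmFrontend(std_tqdm):
        def __init__(self, *args, **kwargs):
            kwargs["gui"] = True  # disable terminal status_printer creation
            super(TqdmFrontend, self).__init__(*args, **kwargs)

        def display(self, *args, **kwargs):
            my_frontend_update(**self.format_dict)

        def clear(self, *args, **kwargs):
            pass  # nothing to clear in this sketch

        def close(self):
            my_frontend_close()
            super(TqdmFrontend, self).close()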
Some submodule examples of inheritance:

- `tqdm/notebook.py `__
- `tqdm/gui.py `__
- `tqdm/tk.py `__
- `tqdm/contrib/slack.py `__
- `tqdm/contrib/discord.py `__
- `tqdm/contrib/telegram.py `__

Dynamic Monitor/Meter
~~~~~~~~~~~~~~~~~~~~~

You can use a ``tqdm`` as a meter which is not monotonically increasing.
This could be because ``n`` decreases (e.g. a CPU usage monitor) or ``total``
changes.

One example would be recursively searching for files. The ``total`` is the
number of objects found so far, while ``n`` is the number of those objects
which are files (rather than folders):

.. code:: python

    from tqdm import tqdm
    import os.path

    def find_files_recursively(path, show_progress=True):
        files = []
        # total=1 assumes `path` is a file
        t = tqdm(total=1, unit="file", disable=not show_progress)
        if not os.path.exists(path):
            raise IOError("Cannot find:" + path)

        def append_found_file(f):
            files.append(f)
            t.update()

        def list_found_dir(path):
            """returns os.listdir(path) assuming os.path.isdir(path)"""
            listing = os.listdir(path)
            # subtract 1 since a "file" we found was actually this directory
            t.total += len(listing) - 1
            # fancy way to give info without forcing a refresh
            t.set_postfix(dir=path[-10:], refresh=False)
            t.update(0)  # may trigger a refresh
            return listing

        def recursively_search(path):
            if os.path.isdir(path):
                for f in list_found_dir(path):
                    recursively_search(os.path.join(path, f))
            else:
                append_found_file(path)

        recursively_search(path)
        t.set_postfix(dir=path)
        t.close()
        return files

Using ``update(0)`` is a handy way to let ``tqdm`` decide when to trigger a
display refresh to avoid console spamming.

Writing messages
~~~~~~~~~~~~~~~~

This is a work in progress (see `#737 `__).

Since ``tqdm`` uses a simple printing mechanism to display progress bars,
you should not write any message in the terminal using ``print()`` while a
progressbar is open.

To write messages in the terminal without any collision with ``tqdm`` bar
display, a ``.write()`` method is provided:

.. code:: python

    from tqdm.auto import tqdm, trange
    from time import sleep

    bar = trange(10)
    for i in bar:
        # Print using tqdm class method .write()
        sleep(0.1)
        if not (i % 3):
            tqdm.write("Done task %i" % i)
        # Can also use bar.write()

By default, this will print to standard output ``sys.stdout``, but you can
specify any file-like object using the ``file`` argument. For example, this
can be used to redirect the messages writing to a log file or class.

Redirecting writing
~~~~~~~~~~~~~~~~~~~

If using a library that can print messages to the console, editing the library
by replacing ``print()`` with ``tqdm.write()`` may not be desirable.
In that case, redirecting ``sys.stdout`` to ``tqdm.write()`` is an option.

To redirect ``sys.stdout``, create a file-like class that will write
any input string to ``tqdm.write()``, and supply the arguments
``file=sys.stdout, dynamic_ncols=True``.

A reusable canonical example is given below:
.. code:: python

    from time import sleep
    import contextlib
    import sys
    from tqdm import tqdm
    from tqdm.contrib import DummyTqdmFile


    @contextlib.contextmanager
    def std_out_err_redirect_tqdm():
        orig_out_err = sys.stdout, sys.stderr
        try:
            sys.stdout, sys.stderr = map(DummyTqdmFile, orig_out_err)
            yield orig_out_err[0]
        # Relay exceptions
        except Exception as exc:
            raise exc
        # Always restore sys.stdout/err if necessary
        finally:
            sys.stdout, sys.stderr = orig_out_err

    def some_fun(i):
        print("Fee, fi, fo,".split()[i])

    # Redirect stdout to tqdm.write() (don't forget the `as save_stdout`)
    with std_out_err_redirect_tqdm() as orig_stdout:
        # tqdm needs the original stdout
        # and dynamic_ncols=True to autodetect console width
        for i in tqdm(range(3), file=orig_stdout, dynamic_ncols=True):
            sleep(.5)
            some_fun(i)

    # After the `with`, printing is restored
    print("Done!")

Redirecting ``logging``
~~~~~~~~~~~~~~~~~~~~~~~

Similar to ``sys.stdout``/``sys.stderr`` as detailed above, console ``logging``
may also be redirected to ``tqdm.write()``.

Warning: if also redirecting ``sys.stdout``/``sys.stderr``, make sure to
redirect ``logging`` first if needed.

Helper methods are available in ``tqdm.contrib.logging``. For example:

.. code:: python

    import logging
    from tqdm import trange
    from tqdm.contrib.logging import logging_redirect_tqdm

    LOG = logging.getLogger(__name__)

    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        with logging_redirect_tqdm():
            for i in trange(9):
                if i == 4:
                    LOG.info("console logging redirected to `tqdm.write()`")
        # logging restored

Monitoring thread, intervals and miniters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tqdm`` implements a few tricks to increase efficiency and reduce overhead.

- Avoid unnecessary frequent bar refreshing: ``mininterval`` defines how long
  to wait between each refresh. ``tqdm`` always gets updated in the background,
  but it will display only every ``mininterval``.
- Reduce number of calls to check system clock/time.
- ``mininterval`` is more intuitive to configure than ``miniters``.
  A clever adjustment system ``dynamic_miniters`` will automatically adjust
  ``miniters`` to the amount of iterations that fit into time ``mininterval``.
  Essentially, ``tqdm`` will check if it's time to print without actually
  checking time. This behaviour can still be bypassed by manually setting
  ``miniters``.

However, consider a case with a combination of fast and slow iterations.
After a few fast iterations, ``dynamic_miniters`` will set ``miniters`` to a
large number. When iteration rate subsequently slows, ``miniters`` will
remain large and thus reduce display update frequency. To address this:

- ``maxinterval`` defines the maximum time between display refreshes.
  A concurrent monitoring thread checks for overdue updates and forces one
  where necessary.

The monitoring thread should not have a noticeable overhead, and guarantees
updates at least every 10 seconds by default.
This value can be directly changed by setting the ``monitor_interval`` of
any ``tqdm`` instance (i.e. ``t = tqdm.tqdm(...); t.monitor_interval = 2``).
The monitor thread may be disabled application-wide by setting
``tqdm.tqdm.monitor_interval = 0`` before instantiation of any ``tqdm`` bar.
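For instance (a tiny sketch of the knob described above):

.. code:: python

    from tqdm import tqdm

    # disable the monitor thread application-wide;
    # must run before any bar is instantiated
    tqdm.monitor_interval = 0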
Merch
-----

You can buy `tqdm branded merch `__ now!

Contributions
-------------

|GitHub-Commits| |GitHub-Issues| |GitHub-PRs| |OpenHub-Status| |GitHub-Contributions| |CII Best Practices|

All source code is hosted on `GitHub `__. Contributions are welcome.

See the `CONTRIBUTING `__ file for more information.

Developers who have made significant contributions, ranked by *SLoC*
(surviving lines of code, `git fame `__ ``-wMC --excl '\.(png|gif|jpg)$'``),
are:

==================== ======================== ==== ================================
Name                 ID                       SLoC Notes
==================== ======================== ==== ================================
Casper da Costa-Luis `casperdcl `__           ~78% primary maintainer |Gift-Casper|
Stephen Larroque     `lrq3000 `__             ~10% team member
Martin Zugnoni       `martinzugnoni `__       ~4%
Daniel Ecer          `de-code `__             ~2%
Richard Sheridan     `richardsheridan `__     ~1%
Guangshuo Chen       `chengs `__              ~1%
Kyle Altendorf       `altendky `__            <1%
Matthew Stevens      `mjstevens777 `__        <1%
Hadrien Mary         `hadim `__               <1%  team member
Noam Yorav-Raphael   `noamraph `__            <1%  original author
Mikhail Korobov      `kmike `__               <1%  team member
==================== ======================== ==== ================================

Ports to Other Languages
~~~~~~~~~~~~~~~~~~~~~~~~

A list is available on `this wiki page `__.

LICENCE
-------

Open Source (OSI approved): |LICENCE|

Citation information: |DOI|

|README-Hits| (Since 19 May 2016)

.. |Logo| image:: https://img.tqdm.ml/logo.gif
.. |Screenshot| image:: https://img.tqdm.ml/tqdm.gif
.. |Video| image:: https://img.tqdm.ml/video.jpg
   :target: https://tqdm.github.io/video
.. |Slides| image:: https://img.tqdm.ml/slides.jpg
   :target: https://tqdm.github.io/PyData2019/slides.html
.. |Merch| image:: https://img.tqdm.ml/merch.jpg
   :target: https://tqdm.github.io/merch
.. |Build-Status| image:: https://img.shields.io/github/workflow/status/tqdm/tqdm/Test/master?logo=GitHub
   :target: https://github.com/tqdm/tqdm/actions?query=workflow%3ATest
.. |Coverage-Status| image:: https://img.shields.io/coveralls/github/tqdm/tqdm/master?logo=coveralls
   :target: https://coveralls.io/github/tqdm/tqdm
.. |Branch-Coverage-Status| image:: https://codecov.io/gh/tqdm/tqdm/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/tqdm/tqdm
.. |Codacy-Grade| image:: https://app.codacy.com/project/badge/Grade/3f965571598f44549c7818f29cdcf177
   :target: https://www.codacy.com/gh/tqdm/tqdm/dashboard
.. |CII Best Practices| image:: https://bestpractices.coreinfrastructure.org/projects/3264/badge
   :target: https://bestpractices.coreinfrastructure.org/projects/3264
.. |GitHub-Status| image:: https://img.shields.io/github/tag/tqdm/tqdm.svg?maxAge=86400&logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/releases
.. |GitHub-Forks| image:: https://img.shields.io/github/forks/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/network
.. |GitHub-Stars| image:: https://img.shields.io/github/stars/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/stargazers
.. |GitHub-Commits| image:: https://img.shields.io/github/commit-activity/y/tqdm/tqdm.svg?logo=git&logoColor=white
   :target: https://github.com/tqdm/tqdm/graphs/commit-activity
.. |GitHub-Issues| image:: https://img.shields.io/github/issues-closed/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/issues?q=
.. |GitHub-PRs| image:: https://img.shields.io/github/issues-pr-closed/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/pulls
.. |GitHub-Contributions| image:: https://img.shields.io/github/contributors/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/graphs/contributors
.. |GitHub-Updated| image:: https://img.shields.io/github/last-commit/tqdm/tqdm/master.svg?logo=github&logoColor=white&label=pushed
   :target: https://github.com/tqdm/tqdm/pulse
.. |Gift-Casper| image:: https://img.shields.io/badge/dynamic/json.svg?color=ff69b4&label=gifts%20received&prefix=%C2%A3&query=%24..sum&url=https%3A%2F%2Fcaspersci.uk.to%2Fgifts.json
   :target: https://cdcl.ml/sponsor
.. |Versions| image:: https://img.shields.io/pypi/v/tqdm.svg
   :target: https://tqdm.github.io/releases
.. |PyPI-Downloads| image:: https://img.shields.io/pypi/dm/tqdm.svg?label=pypi%20downloads&logo=PyPI&logoColor=white
   :target: https://pepy.tech/project/tqdm
.. |Py-Versions| image:: https://img.shields.io/pypi/pyversions/tqdm.svg?logo=python&logoColor=white
   :target: https://pypi.org/project/tqdm
.. |Conda-Forge-Status| image:: https://img.shields.io/conda/v/conda-forge/tqdm.svg?label=conda-forge&logo=conda-forge
   :target: https://anaconda.org/conda-forge/tqdm
.. |Snapcraft| image:: https://img.shields.io/badge/snap-install-82BEA0.svg?logo=snapcraft
   :target: https://snapcraft.io/tqdm
.. |Docker| image:: https://img.shields.io/badge/docker-pull-blue.svg?logo=docker&logoColor=white
   :target: https://hub.docker.com/r/tqdm/tqdm
.. |Libraries-Rank| image:: https://img.shields.io/librariesio/sourcerank/pypi/tqdm.svg?logo=koding&logoColor=white
   :target: https://libraries.io/pypi/tqdm
.. |Libraries-Dependents| image:: https://img.shields.io/librariesio/dependent-repos/pypi/tqdm.svg?logo=koding&logoColor=white
   :target: https://github.com/tqdm/tqdm/network/dependents
.. |OpenHub-Status| image:: https://www.openhub.net/p/tqdm/widgets/project_thin_badge?format=gif
   :target: https://www.openhub.net/p/tqdm?ref=Thin+badge
.. |awesome-python| image:: https://awesome.re/mentioned-badge.svg
   :target: https://github.com/vinta/awesome-python
.. |LICENCE| image:: https://img.shields.io/pypi/l/tqdm.svg
   :target: https://raw.githubusercontent.com/tqdm/tqdm/master/LICENCE
.. |DOI| image:: https://img.shields.io/badge/DOI-10.5281/zenodo.595120-blue.svg
   :target: https://doi.org/10.5281/zenodo.595120
.. |binder-demo| image:: https://mybinder.org/badge_logo.svg
   :target: https://mybinder.org/v2/gh/tqdm/tqdm/master?filepath=DEMO.ipynb
.. |Screenshot-Jupyter1| image:: https://img.tqdm.ml/jupyter-1.gif
.. |Screenshot-Jupyter2| image:: https://img.tqdm.ml/jupyter-2.gif
.. |Screenshot-Jupyter3| image:: https://img.tqdm.ml/jupyter-3.gif
.. |README-Hits| image:: https://caspersci.uk.to/cgi-bin/hits.cgi?q=tqdm&style=social&r=https://github.com/tqdm/tqdm&l=https://img.tqdm.ml/favicon.png&f=https://img.tqdm.ml/logo.gif
   :target: https://caspersci.uk.to/cgi-bin/hits.cgi?q=tqdm&a=plot&r=https://github.com/tqdm/tqdm&l=https://img.tqdm.ml/favicon.png&f=https://img.tqdm.ml/logo.gif&style=social
import time

from loguru import logger
from tqdm import tqdm

logger.remove()
logger.add(lambda msg: tqdm.write(msg, end=""), colorize=True)

logger.info("Initializing")

for x in tqdm(range(100)):
    logger.info("Iterating #{}", x)
    time.sleep(0.1)
# -*- coding: utf-8 -*-
"""Usage:
  7zx.py [--help | options] <zipfiles>...

Options:
  -h, --help     Print this help and exit
  -v, --version  Print version and exit
  -c, --compressed       Use compressed (instead of uncompressed) file sizes
  -s, --silent   Do not print one row per zip file
  -y, --yes      Assume yes to all queries (for extraction)
  -D=<level>, --debug=<level>
                 Print various types of debugging information. Choices:
                 CRITICAL|FATAL
                 ERROR
                 WARN(ING)
                 [default: INFO]
                 DEBUG
                 NOTSET
  -d, --debug-trace      Print lots of debugging information (-D NOTSET)
"""
from __future__ import print_function

import io
import logging
import os
import pty
import re
import subprocess  # nosec

from argopt import argopt
from tqdm import tqdm

__author__ = "Casper da Costa-Luis"
__licence__ = "MPLv2.0"
__version__ = "0.2.2"
__license__ = __licence__

RE_SCN = re.compile(r"([0-9]+)\s+([0-9]+)\s+(.*)$", flags=re.M)


def main():
    args = argopt(__doc__, version=__version__).parse_args()
    if args.debug_trace:
        args.debug = "NOTSET"
    logging.basicConfig(level=getattr(logging, args.debug, logging.INFO),
                        format='%(levelname)s:%(message)s')
    log = logging.getLogger(__name__)
    log.debug(args)

    # Get compressed sizes
    zips = {}
    for fn in args.zipfiles:
        info = subprocess.check_output(["7z", "l", fn]).strip()  # nosec
        finfo = RE_SCN.findall(info)  # size|compressed|name

        # builtin test: last line should be total sizes
        log.debug(finfo)
        totals = map(int, finfo[-1][:2])
        # log.debug(totals)
        for s in range(2):  # size|compressed totals
            totals_s = sum(map(int, (inf[s] for inf in finfo[:-1])))
            if totals_s != totals[s]:
                log.warn("%s: individual total %d != 7z total %d",
                         fn, totals_s, totals[s])
        fcomp = {n: int(c if args.compressed else u)
                 for (u, c, n) in finfo[:-1]}
        # log.debug(fcomp)
        # zips  : {'zipname' : {'filename' : int(size)}}
        zips[fn] = fcomp

    # Extract
    cmd7zx = ["7z", "x", "-bd"]
    if args.yes:
        cmd7zx += ["-y"]
    log.info("Extracting from %d file(s)", len(zips))
    with tqdm(total=sum(sum(fcomp.values()) for fcomp in zips.values()),
              unit="B", unit_scale=True) as tall:
        for fn, fcomp in zips.items():
            md, sd = pty.openpty()
            ex = subprocess.Popen(  # nosec
                cmd7zx + [fn],
                bufsize=1,
                stdout=md,  # subprocess.PIPE,
                stderr=subprocess.STDOUT)
            os.close(sd)
            with io.open(md, mode="rU", buffering=1) as m:
                with tqdm(total=sum(fcomp.values()), disable=len(zips) < 2,
                          leave=False, unit="B", unit_scale=True) as t:
                    if not hasattr(t, "start_t"):  # disabled
                        t.start_t = tall._time()
                    while True:
                        try:
                            l_raw = m.readline()
                        except IOError:
                            break
                        ln = l_raw.strip()
                        if ln.startswith("Extracting"):
                            exname = ln[len("Extracting"):].lstrip()
                            s = fcomp.get(exname, 0)  # 0 is likely folders
                            t.update(s)
                            tall.update(s)
                        elif ln:
                            if not any(
                                    ln.startswith(i)
                                    for i in ("7-Zip ", "p7zip Version ",
                                              "Everything is Ok", "Folders: ",
                                              "Files: ", "Size: ",
                                              "Compressed: ")):
                                if ln.startswith("Processing archive: "):
                                    if not args.silent:
                                        t.write(
                                            t.format_interval(
                                                t.start_t - tall.start_t) +
                                            ' ' + ln.replace(
                                                "Processing archive: ", ""))
                                else:
                                    t.write(ln)
            ex.wait()


main.__doc__ = __doc__


if __name__ == "__main__":
    main()
from __future__ import print_function

import sys
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from multiprocessing import Pool, RLock, freeze_support
from random import random
from threading import RLock as TRLock
from time import sleep

from tqdm.auto import tqdm, trange
from tqdm.contrib.concurrent import process_map, thread_map

NUM_SUBITERS = 9
PY2 = sys.version_info[:1] <= (2,)


def progresser(n, auto_position=True, write_safe=False, blocking=True,
               progress=False):
    interval = random() * 0.002 / (NUM_SUBITERS - n + 2)  # nosec
    total = 5000
    text = "#{0}, est. {1:<04.2}s".format(n, interval * total)
    for _ in trange(total, desc=text, disable=not progress,
                    lock_args=None if blocking else (False,),
                    position=None if auto_position else n):
        sleep(interval)
    # NB: may not clear instances with higher `position` upon completion
    # since this worker may not know about other bars #796
    if write_safe:
        # we think we know about other bars (currently only py3 threading)
        if n == 6:
            tqdm.write("n == 6 completed")
    return n + 1


if __name__ == '__main__':
    freeze_support()  # for Windows support
    L = list(range(NUM_SUBITERS))[::-1]

    print("Simple thread mapping")
    thread_map(partial(progresser, write_safe=not PY2), L, max_workers=4)

    print("Simple process mapping")
    process_map(partial(progresser), L, max_workers=4)

    print("Manual nesting")
    for i in trange(16, desc="1"):
        for _ in trange(16, desc="2 @ %d" % i, leave=i % 2):
            sleep(0.01)

    print("Multi-processing")
    tqdm.set_lock(RLock())
    p = Pool(initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),))
    p.map(partial(progresser, progress=True), L)

    print("Multi-threading")
    tqdm.set_lock(TRLock())
    pool_args = {}
    if not PY2:
        pool_args.update(initializer=tqdm.set_lock,
                         initargs=(tqdm.get_lock(),))
    with ThreadPoolExecutor(**pool_args) as p:
        p.map(partial(progresser, progress=True, write_safe=not PY2,
                      blocking=False), L)
""" Inserting `tqdm` as a "pipe" in a chain of coroutines. Not to be confused with `asyncio.coroutine`. """ from functools import wraps from tqdm.auto import tqdm def autonext(func): @wraps(func) def inner(*args, **kwargs): res = func(*args, **kwargs) next(res) return res return inner @autonext def tqdm_pipe(target, **tqdm_kwargs): """ Coroutine chain pipe `send()`ing to `target`. This: >>> r = receiver() >>> p = producer(r) >>> next(r) >>> next(p) Becomes: >>> r = receiver() >>> t = tqdm.pipe(r) >>> p = producer(t) >>> next(r) >>> next(p) """ with tqdm(**tqdm_kwargs) as pbar: while True: obj = (yield) target.send(obj) pbar.update() def source(target): for i in ["foo", "bar", "baz", "pythonista", "python", "py"]: target.send(i) target.close() @autonext def grep(pattern, target): while True: line = (yield) if pattern in line: target.send(line) @autonext def sink(): while True: line = (yield) tqdm.write(line) if __name__ == "__main__": source( tqdm_pipe( grep('python', sink())))
Trending Discussions on tqdm
QUESTION
I have a tqdm progress bar. I set the postfix string using the method set_postfix_str in some part of my code. In another part, I need to append to this string. Here is an MWE.
import numpy as np
from tqdm import tqdm
a = np.random.randint(0, 10, 10)
loop_obj = tqdm(np.arange(10))
for i in loop_obj:
loop_obj.set_postfix_str(f"Current count: {i}")
a = i*2/3 # Do some operations
loop_obj.set_postfix_str(f"After processing: {a}") # clears the previous string
# What I want
loop_obj.set_postfix_str(f"Current count: {i}After processing: {a}")
Is there a way to append to the already-set string using set_postfix_str?
ANSWER
Answered 2022-Apr-08 at 13:57

You could just append the new postfix to the old one like so:
import numpy as np
from tqdm import tqdm
a = np.random.randint(0, 10, 10)
loop_obj = tqdm(np.arange(10))
for i in loop_obj:
loop_obj.set_postfix_str(f"Current count: {i}")
a = i*2/3 # Do some operations
loop_obj.set_postfix_str(loop_obj.postfix + f" After processing: {a}")
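Alternatively (a sketch using set_postfix, which accepts keyword stats and formats them automatically, so you do not have to concatenate strings by hand):
import numpy as np
from tqdm import tqdm
loop_obj = tqdm(np.arange(10))
for i in loop_obj:
    a = i*2/3  # Do some operations
    # both stats stay visible without manual appending
    loop_obj.set_postfix(current=i, processed=a)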
QUESTION
My question is somewhat similar to this one: How to save out in a new column the url which is reading pandas read_html() function?
I have a set of links that contain tables (4 tables each, and I need only the first three of them). The goal is to store the link of each table in a separate 'address' column.
links = ['www.link1.com', 'www.link2.com', ... , 'www.linkx.com']
details = []
for link in tqdm(links):
page = requests.get(link)
sauce = BeautifulSoup(page.content, 'lxml')
table = sauce.find_all('table')
# Only first 3 tables include data
for i in range(3):
details.append(pd.read_html(str(table))[i])
final_df = pd.concat(details, ignore_index=True)
final_df['address'] = link
time.sleep(2)
However, when I use this code, only the last link is assigned to every row in the 'address' column.
I'm probably missing a detail, but I've spent the last 2 hours figuring it out and simply can't make any progress - I'd really appreciate some help.
ANSWER
Answered 2022-Mar-31 at 13:28

You are close to your goal - add df['address'] in each iteration to your DataFrame before appending it to your list:
for i in table[:3]:
df = pd.read_html(str(i))[0]
df['address'] = link
details.append(df)
Note: You could also slice your ResultSet of tables (table[:3]) so you do not have to use range.
Move the concatenation outside of your loop and call it once your iterations are over:
final_df = pd.concat(details, ignore_index=True)
import pandas as pd
links = ['www.link1.com', 'www.link2.com','www.linkx.com']
details = []
for link in links:
# page = requests.get(link)
# sauce = BeautifulSoup(page.content, 'lxml')
# table = sauce.find_all('table')
table = ['table 1',
'table 2',
'table 3']
# Only first 3 tables include data
for i in table[:3]:
df = pd.read_html(str(i))[0]
df['address'] = link
details.append(df)
final_df = pd.concat(details, ignore_index=True)
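Putting the fix back into the original scraping loop gives (a sketch; the links are the question's placeholders, so this is untested as-is):
import time
import pandas as pd
import requests
from bs4 import BeautifulSoup
from tqdm import tqdm
links = ['www.link1.com', 'www.link2.com', 'www.linkx.com']
details = []
for link in tqdm(links):
    page = requests.get(link)
    sauce = BeautifulSoup(page.content, 'lxml')
    table = sauce.find_all('table')
    for i in table[:3]:  # only the first 3 tables include data
        df = pd.read_html(str(i))[0]
        df['address'] = link  # tag rows with their source link here
        details.append(df)
    time.sleep(2)
final_df = pd.concat(details, ignore_index=True)  # once, after the loop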
QUESTION
In a model with an embedding layer and SimpleRNN layer, I would like to compute the partial derivative dh_t/dh_0 for each step t.
The structure of my model, including imports and data preprocessing, is given below.
Toxic comment train data available: https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/data?select=jigsaw-toxic-comment-train.csv
GloVe 6B 100d embeddings available: https://nlp.stanford.edu/projects/glove/
### 1. Imports
from __future__ import print_function
import numpy as np
from numpy import array, asarray, zeros
import pandas as pd
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras import Input, Model
from keras.models import Sequential
from keras.layers.recurrent import LSTM, GRU,SimpleRNN
from keras.layers.core import Dense, Activation, Dropout, Flatten
from keras.layers.embeddings import Embedding
from tensorflow.keras.layers import BatchNormalization, PReLU
from sklearn import preprocessing, decomposition, model_selection, metrics, pipeline
from keras.preprocessing import sequence, text
from keras import backend as k
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
### 2. Text data tokenisation and GloVe-100d embeddings:
def data_pp():
train= pd.read_csv('/Users/Toxic comment data/jigsaw-toxic-comment-train.csv')
train.drop(['severe_toxic','obscene','threat','insult','identity_hate'],axis=1,inplace=True)
train= train.iloc[:12000,:]
xtr, xte, ytr, yte= train_test_split(train['comment_text'].values,
train['toxic'].values,
stratify= train['toxic'].values,
random_state= 42, test_size= 0.2, shuffle= True)
# Tokenise data
tok= text.Tokenizer(num_words= None)
tok.fit_on_texts(list(xtr)+ list(xte))
input_dim= len(tok.word_index)+1
input_length= train['comment_text'].apply(lambda x: len(str(x).split())).max()
xtr_seq= tok.texts_to_sequences(xtr); xte_seq= tok.texts_to_sequences(xte)
xtr_pad= sequence.pad_sequences(xtr_seq, maxlen= input_length)
xte_pad= sequence.pad_sequences(xte_seq, maxlen= input_length)
print('Shape of tokenised training input:', xtr_pad.shape)
return xtr_pad, ytr, xte_pad, yte, input_dim, input_length, tok
xtr_pad, ytr, xte_pad, yte, input_dim, input_length, tok= data_pp()
# Word embeddings
def embed_mat(input_dim, output_dim, tok):
    '''By default output_dim = 100 for GloVe 100d embeddings'''
    embedding_dict = dict()
    f = open('/Users/GloVe/glove.6B.100d.txt')
    for line in f:
        values = line.split()
        word = values[0]; coefs = asarray(values[1:], dtype='float32')
        embedding_dict[word] = coefs
    f.close()
    Emat = zeros((input_dim, output_dim))
    for word, i in tok.word_index.items():
        embedding_vector = embedding_dict.get(word)
        if embedding_vector is not None:
            Emat[i] = embedding_vector
    print('Embedding weight matrix has shape:', Emat.shape)
    return Emat

output_dim = 100
Emat = embed_mat(input_dim, output_dim, tok)
### 3. Define model and compute gradients:
# You can let it run for a few steps and stop the process. Then inspect the first step h_t, h_0 and the computed dh_t/dh_0.
# For the case in my comment, you can remove the for-loop over the steps t, comment out ht, and compute tape.gradient(states, h0) instead.
batch_size = 100
inp= Input(batch_shape= (batch_size, input_length), name= 'input')
emb_out= Embedding(input_dim, output_dim, input_length= input_length,
weights= [Emat], trainable= False, name= 'embedding')(inp)
rnn= SimpleRNN(200, return_sequences= True, return_state= False, stateful= True, name= 'simpleRNN')
h0 = tf.convert_to_tensor(np.random.uniform(size= (batch_size, 200)).astype(np.float32))
rnn_allstates= rnn(emb_out, initial_state=h0)
model_rnn = Model(inputs=inp, outputs= rnn_allstates, name= 'model_rnn')
model_rnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
ds = tf.data.Dataset.from_tensor_slices((xtr_pad[:100], ytr[:100])).batch(100)
embedding_layer = model_rnn.layers[1]
rnn_layer = model_rnn.layers[2]
grads_allsteps= []
for b, (x_batch_train, y_batch_train) in enumerate(ds):
    for t in range(input_length):
        with tf.GradientTape() as tape:
            tape.watch(h0)
            et = embedding_layer(x_batch_train)
            states = rnn_layer(et, initial_state=h0)  # (100, 1403, 200)
            ht = states[:, t, :]
        grad_t = tape.gradient(ht, h0)  # (100, 200)
        print('Computed gradient dht/dh0 at step', t + 1, 'in batch', b + 1)
        grads_allsteps.append(grad_t)
At each step t, h_t has shape (100, 200) and h_0 has shape (100, 200). However, tape.gradient(ht, h0) returns None for every t. Below is the result of the first step:
for t in range(1):
    with tf.GradientTape() as tape:
        tape.watch(h0)
        et = embedding_layer(x_batch_train)
        # tape.watch(et)
        states = rnn_layer(et, initial_state=h0)  # (100, 1403, 200)
        ht = states[:, t, :]
    print(ht)
    print(h0)
    grad_t = tape.gradient(ht, h0)
    tf.print(grad_t)
>>
# h_t:
tf.Tensor(
[[ 0.25634336 0.5259362 0.60045886 ... -0.4978792 0.62755316
0.09803997]
[ 0.58387524 0.26037565 0.5646103 ... 0.31233114 0.4853201
0.10877549]
[ 0.17190906 0.68681747 -0.32054633 ... -0.6139967 0.48944488
0.06301598]
...
[ 0.1985917 -0.11821499 -0.47709295 ... -0.05718012 0.16089934
0.20585683]
[ 0.73872745 0.503326 0.25224414 ... -0.5771631 0.03748894
0.09212588]
[-0.6597108 -0.43926442 -0.23546427 ... 0.26760277 0.28221437
-0.4039318 ]], shape=(100, 200), dtype=float32)
# h_0:
tf.Tensor(
[[0.51580787 0.51664346 0.70773274 ... 0.45973232 0.7760376 0.48297063]
[0.61048764 0.26038417 0.60392565 ... 0.7426153 0.15507504 0.57494944]
[0.11859739 0.33591187 0.68375146 ... 0.59409297 0.5302879 0.28876984]
...
[0.12401487 0.39376178 0.9850304 ... 0.21582918 0.9592233 0.5257605 ]
[0.9401199 0.2157638 0.6445949 ... 0.36316434 0.5799403 0.3749675 ]
[0.37230062 0.18162128 0.0739954 ... 0.21624395 0.66291 0.7807376 ]], shape=(100, 200), dtype=float32)
# dh_t/dh_0:
None
GradientTape seems to have some difficulty watching this h_0 and performing the gradient computation. I have successfully used GradientTape to watch the inputs e_t to the RNN layer and computed the gradients dh_t/de_t, but this does not really provide much information about the quality of the model fit.
How can I use it to watch the fixed-time quantity h_0, and thus compute the gradient dh_t/dh_0? Thanks in advance for any help.
Reproducible test case:
### 1. Imports
from __future__ import print_function
import numpy as np
from numpy import array, asarray, zeros
import pandas as pd
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras import Input, Model
from keras.models import Sequential
from keras.layers.recurrent import LSTM, GRU,SimpleRNN
from keras.layers.core import Dense, Activation, Dropout, Flatten
from keras.layers.embeddings import Embedding
from tensorflow.keras.layers import BatchNormalization, PReLU
from sklearn import preprocessing, decomposition, model_selection, metrics, pipeline
from keras.preprocessing import sequence, text
from keras import backend as k
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
### 2. Simulated data and gradient computation:
batch_size = 100; input_length = 5
xtr_pad = tf.random.uniform((batch_size, input_length), maxval = 500, dtype=tf.int32)
ytr = tf.random.normal((batch_size, input_length, 200))
inp= Input(batch_shape= (batch_size, input_length), name= 'input')
emb_out= Embedding(500, 100, input_length= input_length, trainable= False, name= 'embedding')(inp)
rnn= SimpleRNN(200, return_sequences= True, return_state= False, stateful= True, name= 'simpleRNN')
h0 = tf.convert_to_tensor(np.random.uniform(size= (batch_size, 200)).astype(np.float32))
rnn_allstates= rnn(emb_out, initial_state=h0)
model_rnn = Model(inputs=inp, outputs= rnn_allstates, name= 'model_rnn')
model_rnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
ds = tf.data.Dataset.from_tensor_slices((xtr_pad, ytr)).batch(100)
embedding_layer = model_rnn.layers[1]
rnn_layer = model_rnn.layers[2]
grads_allsteps= []
for b, (x_batch_train, y_batch_train) in enumerate(ds):
    for t in range(input_length):
        with tf.GradientTape() as tape:
            tape.watch(h0)
            states = model_rnn(x_batch_train)
            ht = states[:, t, :]
        grad_t = tape.gradient(ht, h0)
        print('Computed gradient dht/dh0 at step', t + 1, 'in batch', b + 1)
        grads_allsteps.append(grad_t)
Something interesting: the first-step gradient is computed and looks fine. The rest are Nones.
grads_allsteps
>>
[<tf.Tensor: shape=(100, 200), dtype=float32, numpy=...>, None, None, None, None]
ANSWER
Answered 2022-Feb-18 at 14:02
You could maybe try using tf.gradients. Also, rather use tf.Variable for h0:
# Your imports
#-------
### 2. Simulated data and gradient computation:
batch_size = 100; input_length = 5
xtr_pad = tf.random.uniform((batch_size, input_length), maxval = 500, dtype=tf.int32)
ytr = tf.random.normal((batch_size, input_length, 200))
inp= Input(batch_shape= (batch_size, input_length), name= 'input')
emb_out= Embedding(500, 100, input_length= input_length, trainable= False, name= 'embedding')(inp)
rnn= SimpleRNN(200, return_sequences= True, return_state= False, stateful= True, name= 'simpleRNN')
h0 = tf.Variable(tf.random.uniform((batch_size, 200)))
rnn_allstates= rnn(emb_out, initial_state=h0)
model_rnn = Model(inputs=inp, outputs= rnn_allstates, name= 'model_rnn')
model_rnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
ds = tf.data.Dataset.from_tensor_slices((xtr_pad, ytr)).batch(100)
embedding_layer = model_rnn.layers[1]
rnn_layer = model_rnn.layers[2]
@tf.function
def calculate_t_gradients(t, x, h0):
    return tf.gradients(model_rnn(x)[:, t, :], h0)

grads_allsteps = []
for b, (x_batch_train, y_batch_train) in enumerate(ds):
    for t in range(input_length):
        grads_allsteps.append(calculate_t_gradients(t, x_batch_train, h0))
print(grads_allsteps)
[[<tf.Tensor: shape=(100, 200), dtype=float32, numpy=...>], [<tf.Tensor ...>], [<tf.Tensor ...>], [<tf.Tensor ...>], [<tf.Tensor ...>]]
You need to make sure the stateful parameter of the SimpleRNN is False, because according to the docs:
If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
So, your code will also calculate gradients for each timestep if you set stateful to False.
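Concretely, spelling out the change the answer describes (this line is implied rather than given in the answer), the only edit to the layer definition is the stateful flag:

rnn = SimpleRNN(200, return_sequences=True, return_state=False,
                stateful=False, name='simpleRNN')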
QUESTION
I'm trying to install eth-brownie using 'pipx install eth-brownie' but I get an error saying
pip failed to build package: cytoolz
Some possibly relevant errors from pip install:
build\lib.win-amd64-3.10\cytoolz\functoolz.cp310-win_amd64.pyd : fatal error LNK1120: 1 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\link.exe' failed with exit code 1120
I've had a look at the log file and it shows that it failed to build cytoolz. It also mentions "ALERT: Cython not installed. Building without Cython.". From my limited understanding, cytoolz is a part of Cython, so I think the reason the eth-brownie installation failed is that it could not build cytoolz, as it was trying to build it without Cython. The thing is, I already have Cython installed:
C:\Users\alaiy>pip install cython
Requirement already satisfied: cython in c:\python310\lib\site-packages (0.29.24)
Extract from the log file (I can paste the whole thing but it's lengthy):
Building wheels for collected packages: bitarray, cytoolz, lru-dict, parsimonious, psutil, pygments-lexer-solidity, varint, websockets, wrapt
Building wheel for bitarray (setup.py): started
Building wheel for bitarray (setup.py): finished with status 'done'
Created wheel for bitarray: filename=bitarray-1.2.2-cp310-cp310-win_amd64.whl size=55783 sha256=d4ae97234d659ed9ff1f0c0201e82c7e321bd3f4e122f6c2caee225172e7bfb2
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\1d\29\a8\5364620332cc833df35535f54074cf1e51f94d07d2a660bd6d
Building wheel for cytoolz (setup.py): started
Building wheel for cytoolz (setup.py): finished with status 'error'
Running setup.py clean for cytoolz
Building wheel for lru-dict (setup.py): started
Building wheel for lru-dict (setup.py): finished with status 'done'
Created wheel for lru-dict: filename=lru_dict-1.1.7-cp310-cp310-win_amd64.whl size=12674 sha256=6a7e7b2068eb8481650e0a2ae64c94223b3d2c018f163c5a0e7c1d442077450a
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\47\0a\dc\b156cb52954bbc1c31b4766ca3f0ed9eae9b218812bca89d7b
Building wheel for parsimonious (setup.py): started
Building wheel for parsimonious (setup.py): finished with status 'done'
Created wheel for parsimonious: filename=parsimonious-0.8.1-py3-none-any.whl size=42724 sha256=f9235a9614af0f5204d6bb35b8bd30b9456eae3021b5c2a9904345ad7d07a49d
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\b1\12\f1\7a2f39b30d6780ae9f2be9a52056595e0d97c1b4531d183085
Building wheel for psutil (setup.py): started
Building wheel for psutil (setup.py): finished with status 'done'
Created wheel for psutil: filename=psutil-5.8.0-cp310-cp310-win_amd64.whl size=246135 sha256=834ab1fd1dd0c18e574fc0fbf07922e605169ac68be70b8a64fb90c49ad4ae9b
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\12\a3\6d\615295409067d58a62a069d30d296d61d3ac132605e3a9555c
Building wheel for pygments-lexer-solidity (setup.py): started
Building wheel for pygments-lexer-solidity (setup.py): finished with status 'done'
Created wheel for pygments-lexer-solidity: filename=pygments_lexer_solidity-0.7.0-py3-none-any.whl size=7321 sha256=46355292f790d07d941a745cd58b64c5592e4c24357f7cc80fe200c39ab88d32
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\36\fd\bc\6ff4fe156d46016eca64c9652a1cd7af6411070c88acbeabf5
Building wheel for varint (setup.py): started
Building wheel for varint (setup.py): finished with status 'done'
Created wheel for varint: filename=varint-1.0.2-py3-none-any.whl size=1979 sha256=36b744b26ba7534a494757e16ab6e171d9bb60a4fe4663557d57034f1150b678
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\39\48\5e\33919c52a2a695a512ca394a5308dd12626a40bbcd288de814
Building wheel for websockets (setup.py): started
Building wheel for websockets (setup.py): finished with status 'done'
Created wheel for websockets: filename=websockets-9.1-cp310-cp310-win_amd64.whl size=91765 sha256=a00a9c801269ea2b86d72c0b0b654dc67672519721afeac8f912a157e52901c0
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\79\f7\4e\873eca27ecd6d7230caff265283a5a5112ad4cd1d945c022dd
Building wheel for wrapt (setup.py): started
Building wheel for wrapt (setup.py): finished with status 'done'
Created wheel for wrapt: filename=wrapt-1.12.1-cp310-cp310-win_amd64.whl size=33740 sha256=ccd729b6e3915164ac4994aef731f21cd232466b3f6c4823c9fda14b07e821c3
Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\8e\61\d3\d9e7053100177668fa43216a8082868c55015f8706abd974f2
Successfully built bitarray lru-dict parsimonious psutil pygments-lexer-solidity varint websockets wrapt
Failed to build cytoolz
Installing collected packages: toolz, eth-typing, eth-hash, cytoolz, six, pyparsing, eth-utils, varint, urllib3, toml, rlp, pyrsistent, pycryptodome, py, pluggy, parsimonious, packaging, netaddr, multidict, iniconfig, idna, hexbytes, eth-keys, colorama, charset-normalizer, certifi, base58, attrs, atomicwrites, yarl, typing-extensions, requests, python-dateutil, pytest, multiaddr, jsonschema, inflection, eth-rlp, eth-keyfile, eth-abi, chardet, bitarray, async-timeout, websockets, wcwidth, tomli, sortedcontainers, semantic-version, regex, pywin32, pytest-forked, pyjwt, pygments, protobuf, platformdirs, pathspec, mythx-models, mypy-extensions, lru-dict, ipfshttpclient, execnet, eth-account, dataclassy, click, asttokens, aiohttp, wrapt, web3, vyper, vvm, tqdm, pyyaml, pythx, python-dotenv, pytest-xdist, pygments-lexer-solidity, py-solc-x, py-solc-ast, psutil, prompt-toolkit, lazy-object-proxy, hypothesis, eth-event, eip712, black, eth-brownie
Running setup.py install for cytoolz: started
Running setup.py install for cytoolz: finished with status 'error'
PIP STDERR
----------
WARNING: The candidate selected for download or install is a yanked version: 'protobuf' candidate (version 3.18.0 at https://files.pythonhosted.org/packages/74/4e/9f3cb458266ef5cdeaa1e72a90b9eda100e3d1803cbd7ec02f0846da83c3/protobuf-3.18.0-py2.py3-none-any.whl#sha256=615099e52e9fbc9fde00177267a94ca820ecf4e80093e390753568b7d8cb3c1a (from https://pypi.org/simple/protobuf/))
Reason for being yanked: This version claims to support Python 2 but does not
ERROR: Command errored out with exit status 1:
command: 'C:\Users\alaiy\.local\pipx\venvs\eth-brownie\Scripts\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\alaiy\\AppData\\Local\\Temp\\pip-install-d1bskwa2\\cytoolz_f765f335272241adba2138f1920a35cd\\setup.py'"'"'; __file__='"'"'C:\\Users\\alaiy\\AppData\\Local\\Temp\\pip-install-d1bskwa2\\cytoolz_f765f335272241adba2138f1920a35cd\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\alaiy\AppData\Local\Temp\pip-wheel-pxzumeav'
cwd: C:\Users\alaiy\AppData\Local\Temp\pip-install-d1bskwa2\cytoolz_f765f335272241adba2138f1920a35cd\
Complete output (70 lines):
ALERT: Cython not installed. Building without Cython.
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.10
creating build\lib.win-amd64-3.10\cytoolz
copying cytoolz\compatibility.py -> build\lib.win-amd64-3.10\cytoolz
copying cytoolz\utils_test.py -> build\lib.win-amd64-3.10\cytoolz
Any help would be appreciated!
Edit: Found a solution. Cython appears not to be supported on Python 3.10 (ref https://github.com/eth-brownie/brownie/issues/1300 and https://github.com/cython/cython/issues/4046). I downgraded to Python 3.9.7 and the eth-brownie installation worked!
ANSWER
Answered 2022-Jan-02 at 09:59
I used pip install eth-brownie and it worked fine; I didn't need to downgrade. I'm new to this, so maybe I could be wrong, but it worked fine for me.
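A quick pre-flight check, a sketch of my own based on the question's edit (not from either post), makes the Python-version cause explicit:

# cytoolz could not be built under Python 3.10 at the time, so the
# eth-brownie install only succeeded on interpreters below 3.10.
import sys
assert sys.version_info < (3, 10), "use Python 3.9.x, per the question's edit"
print(sys.version)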
QUESTION
I'm learning about policy gradients and I'm having a hard time understanding how the gradient passes through a random operation. From here: "It is not possible to directly backpropagate through random samples. However, there are two main methods for creating surrogate functions that can be backpropagated through."
They have an example of the score function:
probs = policy_network(state)
# Note that this is equivalent to what used to be called multinomial
m = Categorical(probs)
action = m.sample()
next_state, reward = env.step(action)
loss = -m.log_prob(action) * reward
loss.backward()
Which I tried to create an example of:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Normal
import matplotlib.pyplot as plt
from tqdm import tqdm
softplus = torch.nn.Softplus()
class Model_RL(nn.Module):
    def __init__(self):
        super(Model_RL, self).__init__()
        self.fc1 = nn.Linear(1, 20)
        self.fc2 = nn.Linear(20, 30)
        self.fc3 = nn.Linear(30, 2)

    def forward(self, x):
        x1 = self.fc1(x)
        x = torch.relu(x1)
        x2 = self.fc2(x)
        x = torch.relu(x2)
        x3 = softplus(self.fc3(x))
        return x3, x2, x1
# basic
net_RL = Model_RL()
features = torch.tensor([1.0])
x = torch.tensor([1.0])
y = torch.tensor(3.0)
baseline = 0
baseline_lr = 0.1
epochs = 3
opt_RL = optim.Adam(net_RL.parameters(), lr=1e-3)
losses = []
xs = []
for _ in tqdm(range(epochs)):
    out_RL = net_RL(x)
    mu, std = out_RL[0]
    dist = Normal(mu, std)
    print(dist)
    a = dist.sample()
    log_p = dist.log_prob(a)
    out = features * a
    reward = -torch.square((y - out))
    baseline = (1 - baseline_lr) * baseline + baseline_lr * reward
    loss = -(reward - baseline) * log_p
    opt_RL.zero_grad()
    loss.backward()
    opt_RL.step()
    losses.append(loss.item())
This seems to work magically fine, and again I don't understand how the gradient passes through, as they mentioned that it can't pass through the random operation (but then somehow it does).
Now, since the gradient can't flow through the random operation, I tried to replace mu, std = out_RL[0] with mu, std = out_RL[0].detach(), and that caused the error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. If the gradient doesn't pass through the random operation, I don't understand why detaching a tensor before the operation would matter.
ANSWER
Answered 2021-Nov-30 at 05:48
It is indeed true that sampling is not a differentiable operation per se. However, there exist two (broad) ways to mitigate this: [1] the REINFORCE way and [2] the reparameterization way. Since your example is related to [1], I will restrict my answer to REINFORCE.
What REINFORCE does is entirely get rid of the sampling operation in the computation graph. However, the sampling operation remains outside the graph. So, your statement
.. how does the gradient passes through a random operation ..
isn't correct. It does not pass through any random operation. Let's look at your example:
mu, std = out_RL[0]
dist = Normal(mu, std)
a = dist.sample()
log_p = dist.log_prob(a)
Computation of a does not involve creating a computation graph. It is technically equivalent to plugging in some offline data from a dataset (as in supervised learning):
mu, std = out_RL[0]
dist = Normal(mu, std)
# a = dist.sample()
a = torch.tensor([1.23, 4.01, -1.2, ...], device='cuda')
log_p = dist.log_prob(a)
Since we don't have offline data beforehand, we create it on the fly, and the .sample() method does merely that.
So, there is no random operation on the graph. The log_p depends on mu and std deterministically, just like any standard computation graph. If you cut the connection like this
mu, std = out_RL[0].detach()
.. of course it is going to complain.
Also, do not get confused by this operation
dist = Normal(mu, std)
log_p = dist.log_prob(a)
as it does not contain any randomness by itself. It is merely a shortcut for writing the tedious log-likelihood formula for the Normal distribution.
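To make that last point concrete, here is a small sketch of my own (the values of mu, std and a are arbitrary): log_prob is just the closed-form Normal log-density, an ordinary differentiable function of mu and std:

import math
import torch
from torch.distributions import Normal

mu = torch.tensor(0.5, requires_grad=True)
std = torch.tensor(1.2, requires_grad=True)
a = torch.tensor(0.3)  # a "sampled" value, treated as fixed data

# log N(a; mu, std) = -(a - mu)^2 / (2 std^2) - log(std) - 0.5 log(2 pi)
manual = -(a - mu) ** 2 / (2 * std ** 2) - torch.log(std) - 0.5 * math.log(2 * math.pi)
auto = Normal(mu, std).log_prob(a)
print(torch.allclose(manual, auto))  # True

manual.backward()            # gradients flow into mu and std as usual
print(mu.grad, std.grad)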
QUESTION
I want to download/scrape 50 million log records from a site. Instead of downloading all 50 million in one go, I was trying to download them in parts, like 10 million at a time, using the following code, but it only handles 20,000 at a time (more than that throws an error), so it becomes time-consuming to download that much data. Currently it takes 3-4 minutes to download 20,000 records at a speed of 100%|██████████| 20000/20000 [03:48<00:00, 87.41it/s],
so how can I speed it up?
import asyncio
import aiohttp
import time
import tqdm
import nest_asyncio
nest_asyncio.apply()
async def make_numbers(numbers, _numbers):
    for i in range(numbers, _numbers):
        yield i

n = 0
q = 10000000

async def fetch():
    # example
    url = "https://httpbin.org/anything/log?id="
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        async for x in make_numbers(n, q):
            post_tasks.append(do_get(session, url, x))
        # now execute them all at once
        responses = [await f for f in tqdm.tqdm(asyncio.as_completed(post_tasks), total=len(post_tasks))]

async def do_get(session, url, x):
    headers = {
        'Content-Type': "application/x-www-form-urlencoded",
        'Access-Control-Allow-Origin': "*",
        'Accept-Encoding': "gzip, deflate",
        'Accept-Language': "en-US"
    }
    async with session.get(url + str(x), headers=headers) as response:
        data = await response.text()
        print(data)

s = time.perf_counter()
try:
    loop = asyncio.get_event_loop()
    loop.run_until_complete(fetch())
except:
    print("error")
elapsed = time.perf_counter() - s
# print(f"{__file__} executed in {elapsed:0.2f} seconds.")
Traceback (most recent call last):
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 986, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 1056, in create_connection
raise exceptions[0]
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 1041, in create_connection
sock = await self._connect_sock(
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 955, in _connect_sock
await self.sock_connect(sock, address)
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 702, in sock_connect
return await self._proactor.connect(sock, address)
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 328, in __wakeup
future.result()
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\windows_events.py", line 812, in _poll
value = callback(transferred, key, ov)
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\windows_events.py", line 599, in finish_connect
ov.getresult()
OSError: [WinError 121] The semaphore timeout period has expired
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 136, in
loop.run_until_complete(fetch())
File "C:\Users\SGM\AppData\Roaming\Python\Python39\site-packages\nest_asyncio.py", line 81, in run_until_complete
return f.result()
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\futures.py", line 201, in result
raise self._exception
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 256, in __step
result = coro.send(None)
File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 88, in fetch
response = await f
File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 37, in _wait_for_one
return f.result()
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\futures.py", line 201, in result
raise self._exception
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 258, in __step
result = coro.throw(exc)
File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 125, in do_get
async with session.get(url + str(x), headers=headers) as response:
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\client.py", line 1138, in __aenter__
self._resp = await self._coro
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\client.py", line 535, in _request
conn = await self._connector.connect(
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 542, in connect
proto = await self._create_connection(req, traces, timeout)
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 907, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection
raise last_exc
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 992, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host example.com:80 ssl:default [The semaphore timeout period has expired]
ANSWER
Answered 2022-Feb-27 at 14:37
If it's not the bandwidth that limits you (but I cannot check this), there is a solution less complicated than Celery and RabbitMQ, though not as scalable: it will be limited by your number of CPUs.
Instead of splitting calls across Celery workers, you split them across multiple processes.
I modified the fetch function like this:
async def fetch(start, end):
    # example
    url = "https://httpbin.org/anything/log?id="
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        # use start and end arguments here!
        async for x in make_numbers(start, end):
            post_tasks.append(do_get(session, url, x))
        # now execute them all at once
        responses = [await f for f in
                     tqdm.tqdm(asyncio.as_completed(post_tasks), total=len(post_tasks))]

and I modified the main process:
import concurrent.futures
from itertools import count

def one_executor(start, end):
    loop = asyncio.new_event_loop()
    try:
        loop.run_until_complete(fetch(start, end))
    except:
        print("error")

if __name__ == '__main__':
    s = time.perf_counter()
    # Change the value to the number of cores you want to use.
    max_worker = 4
    length_by_executor = q // max_worker
    with concurrent.futures.ProcessPoolExecutor(max_workers=max_worker) as executor:
        for index_min in count(0, length_by_executor):
            # duplicated indexes do not matter, thanks to the use of
            # range in the make_numbers function.
            index_max = min(index_min + length_by_executor, q)
            executor.submit(one_executor, index_min, index_max)
            if index_max == q:
                break
    elapsed = time.perf_counter() - s
    print(f"executed in {elapsed:0.2f} seconds.")
Here is the result I get (with the value of q set to 10_000):
1 worker: executed in 13.90 seconds.
2 workers: executed in 7.24 seconds.
3 workers: executed in 6.82 seconds.
I didn't work on the tqdm progress bar; with the current solution, two bars will be displayed (but I think tqdm works well with multiple processes).
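For the per-process bars, a small sketch of my own (not part of the answer; the worker function and worker count are illustrative) uses tqdm's position argument so each process draws on its own terminal row:

import time
from concurrent.futures import ProcessPoolExecutor
from tqdm import tqdm

def one_bar(worker_id, n):
    # position pins this worker's bar to its own row; without a shared lock
    # (tqdm.set_lock) the rows may still occasionally interleave.
    for _ in tqdm(range(n), desc=f"worker {worker_id}", position=worker_id):
        time.sleep(0.001)

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=3) as executor:
        for i in range(3):
            executor.submit(one_bar, i, 1000)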
QUESTION
Goal: to run this Auto Labelling Notebook on AWS SageMaker Jupyter Labs.
Kernels tried: conda_pytorch_p36, conda_python3, conda_amazonei_mxnet_p27.
! pip install farm-haystack -q
# Install the latest master of Haystack
!pip install grpcio-tools==1.34.1 -q
!pip install git+https://github.com/deepset-ai/haystack.git -q
!wget --no-check-certificate https://dl.xpdfreader.com/xpdf-tools-linux-4.03.tar.gz
!tar -xvf xpdf-tools-linux-4.03.tar.gz && sudo cp xpdf-tools-linux-4.03/bin64/pdftotext /usr/local/bin
!pip install git+https://github.com/deepset-ai/haystack.git -q
# Here are the imports we need
from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore
from haystack.nodes import PreProcessor, TransformersDocumentClassifier, FARMReader, ElasticsearchRetriever
from haystack.schema import Document
from haystack.utils import convert_files_to_dicts, fetch_archive_from_http, print_answers
Traceback:
02/02/2022 10:36:29 - INFO - faiss.loader - Loading faiss with AVX2 support.
02/02/2022 10:36:29 - INFO - faiss.loader - Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'",)
02/02/2022 10:36:29 - INFO - faiss.loader - Loading faiss.
02/02/2022 10:36:29 - INFO - faiss.loader - Successfully loaded faiss.
02/02/2022 10:36:33 - INFO - farm.modeling.prediction_head - Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in <module>
1 # Here are the imports we need
----> 2 from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore
3 from haystack.nodes import PreProcessor, TransformersDocumentClassifier, FARMReader, ElasticsearchRetriever
4 from haystack.schema import Document
5 from haystack.utils import convert_files_to_dicts, fetch_archive_from_http, print_answers
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/__init__.py in <module>
3 import pandas as pd
4 from haystack.schema import Document, Label, MultiLabel, BaseComponent
----> 5 from haystack.finder import Finder
6 from haystack.pipeline import Pipeline
7
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/finder.py in <module>
6 from collections import defaultdict
7
----> 8 from haystack.reader.base import BaseReader
9 from haystack.retriever.base import BaseRetriever
10 from haystack import MultiLabel
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/reader/__init__.py in <module>
----> 1 from haystack.reader.farm import FARMReader
2 from haystack.reader.transformers import TransformersReader
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/reader/farm.py in <module>
22
23 from haystack import Document
---> 24 from haystack.document_store.base import BaseDocumentStore
25 from haystack.reader.base import BaseReader
26
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/document_store/__init__.py in <module>
2 from haystack.document_store.faiss import FAISSDocumentStore
3 from haystack.document_store.memory import InMemoryDocumentStore
----> 4 from haystack.document_store.milvus import MilvusDocumentStore
5 from haystack.document_store.sql import SQLDocumentStore
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/document_store/milvus.py in <module>
5 import numpy as np
6
----> 7 from milvus import IndexType, MetricType, Milvus, Status
8 from scipy.special import expit
9 from tqdm import tqdm
ModuleNotFoundError: No module named 'milvus'
pip install milvus
import milvus
Traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in <module>
----> 1 import milvus
ModuleNotFoundError: No module named 'milvus'
ANSWER
Answered 2022-Feb-03 at 09:29
I would recommend downgrading your milvus version to one from before the 2.0 release just a week ago. Here is a discussion on that topic: https://github.com/deepset-ai/haystack/issues/2081
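A quick way to verify the downgrade (my sketch; the "milvus<2.0" pin is an assumption based on the answer, not a command it gives) is to re-run the exact import from the traceback:

# After e.g.: pip install "milvus<2.0"
# the import that failed above should now succeed:
from milvus import IndexType, MetricType, Milvus, Status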
QUESTION
I have a pretrained model for object detection (Google Colab + TensorFlow) and I run it two or three times per week for new images. Everything was fine for the last year, until this week. Now when I try to run the model I get this message:
Graph execution error:
2 root error(s) found.
(0) UNIMPLEMENTED: DNN library is not found.
[[{{node functional_1/conv1_conv/Conv2D}}]]
[[StatefulPartitionedCall/SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Reshape_5/_126]]
(1) UNIMPLEMENTED: DNN library is not found.
[[{{node functional_1/conv1_conv/Conv2D}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_restored_function_body_27380] ***
This never happened before.
Before I can run my model, I have to install the TensorFlow Object Detection API with these commands:
import os
os.chdir('/project/models/research')
!protoc object_detection/protos/*.proto --python_out=.
!cp object_detection/packages/tf2/setup.py .
!python -m pip install .
This is the output of command:
Processing /content/gdrive/MyDrive/models/research
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Collecting avro-python3
Downloading avro-python3-1.10.2.tar.gz (38 kB)
Collecting apache-beam
Downloading apache_beam-2.35.0-cp37-cp37m-manylinux2010_x86_64.whl (9.9 MB)
|████████████████████████████████| 9.9 MB 1.6 MB/s
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (7.1.2)
Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (4.2.6)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (3.2.2)
Requirement already satisfied: Cython in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (0.29.27)
Requirement already satisfied: contextlib2 in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (0.5.5)
Collecting tf-slim
Downloading tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
|████████████████████████████████| 352 kB 50.5 MB/s
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.15.0)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (2.0.4)
Collecting lvis
Downloading lvis-0.5.3-py3-none-any.whl (14 kB)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.4.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.3.5)
Collecting tf-models-official>=2.5.1
Downloading tf_models_official-2.8.0-py2.py3-none-any.whl (2.2 MB)
|████████████████████████████████| 2.2 MB 38.3 MB/s
Collecting tensorflow_io
Downloading tensorflow_io-0.24.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (23.4 MB)
|████████████████████████████████| 23.4 MB 1.7 MB/s
Requirement already satisfied: keras in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (2.7.0)
Collecting opencv-python-headless
Downloading opencv_python_headless-4.5.5.62-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (47.7 MB)
|████████████████████████████████| 47.7 MB 74 kB/s
Collecting sacrebleu
Downloading sacrebleu-2.0.0-py3-none-any.whl (90 kB)
|████████████████████████████████| 90 kB 10.4 MB/s
Requirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.5.12)
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (5.4.8)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.1.3)
Collecting tensorflow-addons
Downloading tensorflow_addons-0.15.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 37.8 MB/s
Requirement already satisfied: gin-config in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.5.0)
Requirement already satisfied: tensorflow-datasets in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.0.1)
Collecting sentencepiece
Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
|████████████████████████████████| 1.2 MB 37.5 MB/s
Collecting tensorflow-model-optimization>=0.4.1
Downloading tensorflow_model_optimization-0.7.0-py2.py3-none-any.whl (213 kB)
|████████████████████████████████| 213 kB 42.7 MB/s
Collecting pyyaml<6.0,>=5.1
Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
|████████████████████████████████| 636 kB 53.3 MB/s
Collecting tensorflow-text~=2.8.0
Downloading tensorflow_text-2.8.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.9 MB)
|████████████████████████████████| 4.9 MB 46.1 MB/s
Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.12.10)
Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.19.5)
Requirement already satisfied: tensorflow-hub>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.12.0)
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
|████████████████████████████████| 43 kB 2.1 MB/s
Collecting tensorflow~=2.8.0
Downloading tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl (497.5 MB)
|████████████████████████████████| 497.5 MB 28 kB/s
Collecting py-cpuinfo>=3.3.0
Downloading py-cpuinfo-8.0.0.tar.gz (99 kB)
|████████████████████████████████| 99 kB 10.1 MB/s
Requirement already satisfied: google-auth<3dev,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.35.0)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.1)
Requirement already satisfied: httplib2<1dev,>=0.15.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.17.4)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.0.4)
Requirement already satisfied: google-api-core<3dev,>=1.21.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.26.3)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (57.4.0)
Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2018.9)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.54.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.23.0)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (21.3)
Requirement already satisfied: protobuf>=3.12.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.17.3)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.2.4)
Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2021.10.8)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.24.3)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2.8.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (4.62.3)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (5.0.2)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=14.3->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.7)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.4.8)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.4)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.0)
Requirement already satisfied: libclang>=9.0.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (13.0.0)
Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0)
Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.6.3)
Requirement already satisfied: gast>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.0)
Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.2.0)
Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.10.0.2)
Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.13.3)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.23.1)
Collecting tf-estimator-nightly==2.8.0.dev2021122109
Downloading tf_estimator_nightly-2.8.0.dev2021122109-py2.py3-none-any.whl (462 kB)
|████████████████████████████████| 462 kB 49.5 MB/s
Requirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.2)
Collecting tensorboard<2.9,>=2.8
Downloading tensorboard-2.8.0-py3-none-any.whl (5.8 MB)
|████████████████████████████████| 5.8 MB 41.2 MB/s
Requirement already satisfied: flatbuffers>=1.12 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (2.0)
Collecting keras
Downloading keras-2.8.0-py2.py3-none-any.whl (1.4 MB)
|████████████████████████████████| 1.4 MB 41.2 MB/s
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.0)
Collecting numpy>=1.15.4
Downloading numpy-1.21.5-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
|████████████████████████████████| 15.7 MB 41.4 MB/s
Requirement already satisfied: absl-py>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.43.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from astunparse>=1.6.0->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.37.1)
Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.5.2)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.6.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.6)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.8.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.6)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (4.10.1)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.7.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.2.0)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-official>=2.5.1->object-detection==0.1) (0.1.6)
Requirement already satisfied: crcmod<2.0,>=1.7 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (1.7)
Collecting fastavro<2,>=0.21.4
Downloading fastavro-1.4.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)
|████████████████████████████████| 2.3 MB 38.1 MB/s
Requirement already satisfied: pyarrow<7.0.0,>=0.15.1 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (6.0.1)
Requirement already satisfied: pydot<2,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (1.3.0)
Collecting proto-plus<2,>=1.7.1
Downloading proto_plus-1.19.9-py3-none-any.whl (45 kB)
|████████████████████████████████| 45 kB 3.2 MB/s
Collecting requests<3.0.0dev,>=2.18.0
Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
|████████████████████████████████| 63 kB 1.8 MB/s
Collecting dill<0.3.2,>=0.3.1.1
Downloading dill-0.3.1.1.tar.gz (151 kB)
|████████████████████████████████| 151 kB 44.4 MB/s
Collecting numpy>=1.15.4
Downloading numpy-1.20.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.3 MB)
|████████████████████████████████| 15.3 MB 21.1 MB/s
Collecting orjson<4.0
Downloading orjson-3.6.6-cp37-cp37m-manylinux_2_24_x86_64.whl (245 kB)
|████████████████████████████████| 245 kB 53.2 MB/s
Collecting hdfs<3.0.0,>=2.1.0
Downloading hdfs-2.6.0-py3-none-any.whl (33 kB)
Collecting pymongo<4.0.0,>=3.8.0
Downloading pymongo-3.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (508 kB)
|████████████████████████████████| 508 kB 44.3 MB/s
Requirement already satisfied: docopt in /usr/local/lib/python3.7/dist-packages (from hdfs<3.0.0,>=2.1.0->apache-beam->object-detection==0.1) (0.6.2)
Collecting protobuf>=3.12.0
Downloading protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 47.3 MB/s
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.0.11)
Requirement already satisfied: opencv-python>=4.1.0.25 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (4.1.2.30)
Requirement already satisfied: cycler>=0.10.0 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (0.11.0)
Requirement already satisfied: kiwisolver>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (1.3.2)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.7/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.3)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from sacrebleu->tf-models-official>=2.5.1->object-detection==0.1) (2019.12.20)
Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.7/dist-packages (from sacrebleu->tf-models-official>=2.5.1->object-detection==0.1) (0.8.9)
Collecting portalocker
Downloading portalocker-2.3.2-py2.py3-none-any.whl (15 kB)
Collecting colorama
Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Requirement already satisfied: scikit-learn>=0.21.3 in /usr/local/lib/python3.7/dist-packages (from seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.0.2)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0)
Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons->tf-models-official>=2.5.1->object-detection==0.1) (2.7.1)
Requirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (2.3)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (0.16.0)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (21.4.0)
Requirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (5.4.0)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (1.6.0)
Collecting tensorflow-io-gcs-filesystem>=0.23.1
Downloading tensorflow_io_gcs_filesystem-0.24.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (2.1 MB)
|████████████████████████████████| 2.1 MB 40.9 MB/s
Building wheels for collected packages: object-detection, py-cpuinfo, dill, avro-python3, seqeval
Building wheel for object-detection (setup.py) ... done
Created wheel for object-detection: filename=object_detection-0.1-py3-none-any.whl size=1686316 sha256=775b8c34c800b3b3139d1067abd686af9ce9158011fccfb5450ccfd9bf424a5a
Stored in directory: /tmp/pip-ephem-wheel-cache-rmw0fvil/wheels/d0/e3/e9/b9ffe85019ec441e90d8ff9eddee9950c4c23b7598204390b9
Building wheel for py-cpuinfo (setup.py) ... done
Created wheel for py-cpuinfo: filename=py_cpuinfo-8.0.0-py3-none-any.whl size=22257 sha256=ac956c4c039868fdba78645bea056754e667e8840bea783ad2ca75e4d3e682c6
Stored in directory: /root/.cache/pip/wheels/d2/f1/1f/041add21dc9c4220157f1bd2bd6afe1f1a49524c3396b94401
Building wheel for dill (setup.py) ... done
Created wheel for dill: filename=dill-0.3.1.1-py3-none-any.whl size=78544 sha256=d9c6cdfd69aea2b4d78e6afbbe2bc530394e4081eb186eb4f4cd02373ca739fd
Stored in directory: /root/.cache/pip/wheels/a4/61/fd/c57e374e580aa78a45ed78d5859b3a44436af17e22ca53284f
Building wheel for avro-python3 (setup.py) ... done
Created wheel for avro-python3: filename=avro_python3-1.10.2-py3-none-any.whl size=44010 sha256=4eca8b4f30e4850d5dabccee36c40c8dda8a6c7e7058cfb7f0258eea5ce7b2b3
Stored in directory: /root/.cache/pip/wheels/d6/e5/b1/6b151d9b535ee50aaa6ab27d145a0104b6df02e5636f0376da
Building wheel for seqeval (setup.py) ... done
Created wheel for seqeval: filename=seqeval-1.2.2-py3-none-any.whl size=16180 sha256=0ddfa46d0e36e9be346a90833ef11cc0d38cc7e744be34c5a0d321f997a30cba
Stored in directory: /root/.cache/pip/wheels/05/96/ee/7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
Successfully built object-detection py-cpuinfo dill avro-python3 seqeval
Installing collected packages: requests, protobuf, numpy, tf-estimator-nightly, tensorflow-io-gcs-filesystem, tensorboard, keras, tensorflow, portalocker, dill, colorama, tf-slim, tensorflow-text, tensorflow-model-optimization, tensorflow-addons, seqeval, sentencepiece, sacrebleu, pyyaml, pymongo, py-cpuinfo, proto-plus, orjson, opencv-python-headless, hdfs, fastavro, tf-models-official, tensorflow-io, lvis, avro-python3, apache-beam, object-detection
Attempting uninstall: requests
Found existing installation: requests 2.23.0
Uninstalling requests-2.23.0:
Successfully uninstalled requests-2.23.0
Attempting uninstall: protobuf
Found existing installation: protobuf 3.17.3
Uninstalling protobuf-3.17.3:
Successfully uninstalled protobuf-3.17.3
Attempting uninstall: numpy
Found existing installation: numpy 1.19.5
Uninstalling numpy-1.19.5:
Successfully uninstalled numpy-1.19.5
Attempting uninstall: tensorflow-io-gcs-filesystem
Found existing installation: tensorflow-io-gcs-filesystem 0.23.1
Uninstalling tensorflow-io-gcs-filesystem-0.23.1:
Successfully uninstalled tensorflow-io-gcs-filesystem-0.23.1
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.7.0
Uninstalling tensorboard-2.7.0:
Successfully uninstalled tensorboard-2.7.0
Attempting uninstall: keras
Found existing installation: keras 2.7.0
Uninstalling keras-2.7.0:
Successfully uninstalled keras-2.7.0
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.7.0
Uninstalling tensorflow-2.7.0:
Successfully uninstalled tensorflow-2.7.0
Attempting uninstall: dill
Found existing installation: dill 0.3.4
Uninstalling dill-0.3.4:
Successfully uninstalled dill-0.3.4
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Attempting uninstall: pymongo
Found existing installation: pymongo 4.0.1
Uninstalling pymongo-4.0.1:
Successfully uninstalled pymongo-4.0.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
yellowbrick 1.3.post1 requires numpy<1.20,>=1.16.0, but you have numpy 1.20.3 which is incompatible.
multiprocess 0.70.12.2 requires dill>=0.3.4, but you have dill 0.3.1.1 which is incompatible.
google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.27.1 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.
Successfully installed apache-beam-2.35.0 avro-python3-1.10.2 colorama-0.4.4 dill-0.3.1.1 fastavro-1.4.9 hdfs-2.6.0 keras-2.8.0 lvis-0.5.3 numpy-1.20.3 object-detection-0.1 opencv-python-headless-4.5.5.62 orjson-3.6.6 portalocker-2.3.2 proto-plus-1.19.9 protobuf-3.19.4 py-cpuinfo-8.0.0 pymongo-3.12.3 pyyaml-5.4.1 requests-2.27.1 sacrebleu-2.0.0 sentencepiece-0.1.96 seqeval-1.2.2 tensorboard-2.8.0 tensorflow-2.8.0 tensorflow-addons-0.15.0 tensorflow-io-0.24.0 tensorflow-io-gcs-filesystem-0.24.0 tensorflow-model-optimization-0.7.0 tensorflow-text-2.8.1 tf-estimator-nightly-2.8.0.dev2021122109 tf-models-official-2.8.0 tf-slim-1.1.0
I am noticing that this command uninstalls tensorflow 2.7 and installs tensorflow 2.8. I am not sure this was happening before. Maybe that is the reason the DNN library link is missing or something?
I can confirm the following:
- Nothing was changed inside the pretrained model, the already-installed model, or the object_detection source files I downloaded a year ago.
- I tried to run the command !pip install dnn - not working.
- I tried to restart the runtime (without disconnecting) - not working.
Can somebody help? Thanks.
ANSWER
Answered 2022-Feb-07 at 09:19
The same thing happened to me last Friday. I think it has something to do with the CUDA installation in Google Colab, but I don't know the exact reason.
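A workaround that is often suggested for this symptom (purely an assumption here, not part of the answer above) is to pin TensorFlow and its companion packages back to the 2.7.x line, so they match the CUDA/cuDNN runtime preinstalled in Colab; the install log above shows exactly these packages being upgraded to 2.8:

.. code:: sh

   # Hypothetical workaround, not from the original answer: re-pin the
   # packages that the log shows being replaced by their 2.8 versions.
   pip install "tensorflow==2.7.0" "tensorboard==2.7.0" "keras==2.7.0"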
QUESTION
I am trying to install conda on EMR, and below is my bootstrap script. It looks like conda is getting installed, but it is not being added to the PATH environment variable. When I manually update the $PATH variable on the EMR master node, it can identify conda. I want to use conda on Zeppelin.
I also tried adding the configuration below while launching my EMR instance; however, I still get the error mentioned below.
"classification": "spark-env",
"properties": {
"conda": "/home/hadoop/conda/bin"
}
[hadoop@ip-172-30-5-150 ~]$ PATH=/home/hadoop/conda/bin:$PATH
[hadoop@ip-172-30-5-150 ~]$ conda
usage: conda [-h] [-V] command ...
conda is a tool for managing and deploying applications, environments and packages.
#!/usr/bin/env bash
# Install conda
wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh -O /home/hadoop/miniconda.sh \
&& /bin/bash ~/miniconda.sh -b -p $HOME/conda
conda config --set always_yes yes --set changeps1 no
conda install conda=4.2.13
conda config -f --add channels conda-forge
rm ~/miniconda.sh
echo bootstrap_conda.sh completed. PATH now: $PATH
export PYSPARK_PYTHON="/home/hadoop/conda/bin/python3.5"
echo -e '\nexport PATH=$HOME/conda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc
conda create -n zoo python=3.7 # "zoo" is conda environment name, you can use any name you like.
conda activate zoo
sudo pip3 install tensorflow
sudo pip3 install boto3
sudo pip3 install botocore
sudo pip3 install numpy
sudo pip3 install pandas
sudo pip3 install scipy
sudo pip3 install s3fs
sudo pip3 install matplotlib
sudo pip3 install -U tqdm
sudo pip3 install -U scikit-learn
sudo pip3 install -U scikit-multilearn
sudo pip3 install xlutils
sudo pip3 install natsort
sudo pip3 install pydot
sudo pip3 install python-pydot
sudo pip3 install python-pydot-ng
sudo pip3 install pydotplus
sudo pip3 install h5py
sudo pip3 install graphviz
sudo pip3 install recmetrics
sudo pip3 install openpyxl
sudo pip3 install xlrd
sudo pip3 install xlwt
sudo pip3 install tensorflow.io
sudo pip3 install Cython
sudo pip3 install ray
sudo pip3 install zoo
sudo pip3 install analytics-zoo
sudo pip3 install analytics-zoo[ray]
#sudo /usr/bin/pip-3.6 install -U imbalanced-learn
ANSWER
Answered 2022-Feb-05 at 00:17
I got conda working by modifying the script as below; the EMR Python versions were colliding with the conda version:
wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.9.2-Linux-x86_64.sh -O /home/hadoop/miniconda.sh \
&& /bin/bash ~/miniconda.sh -b -p $HOME/conda
echo -e '\n export PATH=$HOME/conda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc
conda config --set always_yes yes --set changeps1 no
conda config -f --add channels conda-forge
conda create -n zoo python=3.7 # "zoo" is conda environment name
conda init bash
source activate zoo
conda install python 3.7.0 -c conda-forge orca
sudo /home/hadoop/conda/envs/zoo/bin/python3.7 -m pip install virtualenv
and setting the Zeppelin Python and PySpark parameters to:
"spark.pyspark.python": "/home/hadoop/conda/envs/zoo/bin/python3",
"spark.pyspark.virtualenv.enabled": "true",
"spark.pyspark.virtualenv.type": "native",
"spark.pyspark.virtualenv.bin.path": "/home/hadoop/conda/envs/zoo/bin/",
"zeppelin.pyspark.python": "/home/hadoop/conda/bin/python",
"zeppelin.python": "/home/hadoop/conda/bin/python"
Orca only supports TF up to 1.5, hence it was not working, as I am using TF2.
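For context, a bootstrap script like the one above is attached when the cluster is launched; here is a minimal sketch using the AWS CLI (the bucket name, release label, and instance details are hypothetical, not taken from the question or answer):

.. code:: sh

   # Hypothetical invocation: upload the script, then reference it as a
   # bootstrap action when creating the EMR cluster.
   aws s3 cp bootstrap_conda.sh s3://my-bucket/bootstrap_conda.sh
   aws emr create-cluster --name "conda-zeppelin" \
       --release-label emr-6.5.0 \
       --applications Name=Spark Name=Zeppelin \
       --instance-type m5.xlarge --instance-count 3 \
       --use-default-roles \
       --bootstrap-actions Path=s3://my-bucket/bootstrap_conda.sh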
QUESTION
Error while installing manimce: I have been trying to install the manimce library on Windows Subsystem for Linux, and after running
pip install manimce
Collecting manimce
Downloading manimce-0.1.1.post2-py3-none-any.whl (249 kB)
|████████████████████████████████| 249 kB 257 kB/s
Collecting Pillow
Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB)
Collecting scipy
Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB)
Collecting colour
Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB)
Collecting pangocairocffi<0.5.0,>=0.4.0
Downloading pangocairocffi-0.4.0.tar.gz (17 kB)
Preparing metadata (setup.py) ... done
Collecting numpy
Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
Collecting pydub
Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting pygments
Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB)
Collecting cairocffi<2.0.0,>=1.1.0
Downloading cairocffi-1.3.0.tar.gz (88 kB)
|████████████████████████████████| 88 kB 160 kB/s
Preparing metadata (setup.py) ... done
Collecting tqdm
Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
Collecting pangocffi<0.9.0,>=0.8.0
Downloading pangocffi-0.8.0.tar.gz (33 kB)
Preparing metadata (setup.py) ... done
Collecting pycairo<2.0,>=1.19
Using cached pycairo-1.20.1.tar.gz (344 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting progressbar
Downloading progressbar-2.5.tar.gz (10 kB)
Preparing metadata (setup.py) ... done
Collecting rich<7.0,>=6.0
Using cached rich-6.2.0-py3-none-any.whl (150 kB)
Collecting cffi>=1.1.0
Using cached cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (446 kB)
Collecting commonmark<0.10.0,>=0.9.0
Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
Collecting typing-extensions<4.0.0,>=3.7.4
Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)
Collecting colorama<0.5.0,>=0.4.0
Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Collecting pycparser
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Building wheels for collected packages: cairocffi, pangocairocffi, pangocffi, pycairo, progressbar
Building wheel for cairocffi (setup.py) ... done
Created wheel for cairocffi: filename=cairocffi-1.3.0-py3-none-any.whl size=89650 sha256=afc73218cc9fa1d844d7165f598e2be0428598166b4c3ed9de5bbdc94a0a6977
Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/f3/97/83/8022b9237866102e18d1b7ac0a269769e6fccba0f63dceb9b7
Building wheel for pangocairocffi (setup.py) ... done
Created wheel for pangocairocffi: filename=pangocairocffi-0.4.0-py3-none-any.whl size=19283 sha256=54399796259c6e24f9ab56c5747ab273dcf97fb6fed3e7b54935f9ac49351d50
Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/60/58/92/507a12a5044f7fcda6f4dfd8e0a607cc1fe957bc0dea885906
Building wheel for pangocffi (setup.py) ... done
Created wheel for pangocffi: filename=pangocffi-0.8.0-py3-none-any.whl size=37899 sha256=bea348af93696816b046dd901aa60d29a464460c5faac67628eb7e1ea7d1807d
Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/c4/df/6d/e9d0f79b1545f6e902cc22773b1429de7a5efc240b891ee009
Building wheel for pycairo (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpuguwzu3u
cwd: /tmp/pip-install-l4hqdegr/pycairo_f4d80b8f3e4840a3802342825adcdff5
Complete output (12 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/cairo
copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo
copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo
copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo
running build_ext
'pkg-config' not found.
Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']
----------------------------------------
ERROR: Failed building wheel for pycairo
Building wheel for progressbar (setup.py) ... done
Created wheel for progressbar: filename=progressbar-2.5-py3-none-any.whl size=12074 sha256=7290ef8de5dd955bf756b90130f400dd19c2cc9ea050a5a1dce2803440f581e2
Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/2c/67/ed/d84123843c937d7e7f5ba88a270d11036473144143355e2747
Successfully built cairocffi pangocairocffi pangocffi progressbar
Failed to build pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
(venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$
(venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip install manim_ce
ERROR: Could not find a version that satisfies the requirement manim_ce (from versions: none)
ERROR: No matching distribution found for manim_ce
(venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ manim example_scenes/basic.py -pql
Command 'manim' not found, did you mean:
command 'maim' from deb maim (5.5.3-1build1)
Try: sudo apt install
(venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ sudo apt-get install manim
[sudo] password for yusifer_zendric:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package manim
(venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip3 install manimlib
Collecting manimlib
Downloading manimlib-0.2.0.tar.gz (4.8 MB)
|████████████████████████████████| 4.8 MB 498 kB/s
Preparing metadata (setup.py) ... done
Collecting Pillow
Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB)
Collecting argparse
Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Collecting colour
Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB)
Collecting numpy
Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
Collecting opencv-python
Downloading opencv_python-4.5.4.60-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB)
|████████████████████████████████| 60.3 MB 520 kB/s
Collecting progressbar
Using cached progressbar-2.5-py3-none-any.whl
Collecting pycairo
Using cached pycairo-1.20.1.tar.gz (344 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting pydub
Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting pygments
Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB)
Collecting scipy
Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB)
Collecting tqdm
Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
Building wheels for collected packages: manimlib, pycairo
Building wheel for manimlib (setup.py) ... done
Created wheel for manimlib: filename=manimlib-0.2.0-py3-none-any.whl size=212737 sha256=27efe2c226d80cfe5663928e980d3e5f5a164d8e9d0aacea5014d37ffdedb76a
Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/87/36/c1/2db5ed5de9908034108f3c39538cd3367445d9cec01e7c8c23
Building wheel for pycairo (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp5o2970su
cwd: /tmp/pip-install-sxxp3lw2/pycairo_d372a62d0c6b4c4484391402d21485e1
Complete output (12 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/cairo
copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo
copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo
copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo
running build_ext
'pkg-config' not found.
Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']
----------------------------------------
ERROR: Failed building wheel for pycairo
Successfully built manimlib
Failed to build pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
All the libraries are installed except the pycairo library. It just keeps showing this pyproject.toml install error. In fact, I have already run pip install pyproject.toml and it is installed, yet it still shows the same error.
ANSWER
Answered 2022-Jan-28 at 02:24
apt-get install sox ffmpeg libcairo2 libcairo2-dev
apt-get install texlive-full
pip3 install manimlib # or pip install manimlib
Then:
pip3 install manimce # or pip install manimce
And everything works.
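If the 'pkg-config' not found error from the pycairo build persists, installing pkg-config itself may help (an extra step assumed here, it is not listed in the answer above), since pycairo's build step invokes pkg-config to locate the cairo headers:

.. code:: sh

   # Assumed follow-up: pycairo's setup runs
   # pkg-config --exists 'cairo >= 1.15.10', so pkg-config must be on PATH.
   sudo apt-get install pkg-config
   pip3 install pycairo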
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install tqdm
You can use tqdm like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
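A minimal sketch of that recommendation (the environment path is illustrative):

.. code:: sh

   python3 -m venv .venv                       # isolated environment
   source .venv/bin/activate
   pip install --upgrade pip setuptools wheel  # keep build tooling current
   pip install tqdm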