Explore all Debian open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Debian

electron-builder

v23.0.6

baseimage-docker

focal-1.0.0

termux-packages

Bootstrap archives for Termux application

docker-gitlab

14.9.2

AppImageKit

Continuous build

Popular Libraries in Debian

streisand

by StreisandEffect · Shell

22448 stars · License: NOASSERTION

Streisand sets up a new server running your choice of WireGuard, OpenConnect, OpenSSH, OpenVPN, Shadowsocks, sslh, Stunnel, or a Tor bridge. It also generates custom instructions for all of these services. At the end of the run you are given an HTML file with instructions that can be shared with friends, family members, and fellow activists.

openvpn-install

by Nyr · Shell

13652 stars · License: MIT

OpenVPN road warrior installer for Ubuntu, Debian, CentOS and Fedora

electron-builder

by electron-userland · TypeScript

11950 stars · License: MIT

A complete solution to package and build a ready for distribution Electron app with “auto update” support out of the box

baseimage-docker

by phusion · Shell

8208 stars · License: MIT

A minimal Ubuntu base image modified for Docker-friendliness

ui

by andlabs · Go

7879 stars · License: NOASSERTION

Platform-native GUI library for Go.

termux-packages

by termux · Shell

7867 stars · License: NOASSERTION

A build system and primary set of packages for Termux.

openvpn-install

by angristan · Shell

7820 stars · License: MIT

Set up your own OpenVPN server on Debian, Ubuntu, Fedora, CentOS or Arch Linux.

crouton

by dnschneid · Shell

7819 stars · License: NOASSERTION

Chromium OS Universal Chroot Environment

shadowsocks_install

by teddysun · Shell

7483 stars

Auto Install Shadowsocks Server for CentOS/Debian/Ubuntu

Trending New libraries in Debian

fhs-install-v2ray

by v2fly · Shell

2424 stars · License: GPL-3.0

Bash script for installing V2Ray in operating systems such as Debian / CentOS / Fedora / openSUSE that support systemd

rofi

by adi1090x · Shell

2381 stars · License: GPL-3.0

A large collection of Rofi based custom Menu, Applets, Launchers & Powermenus.

pyston

by pyston · Python

1915 stars · License: NOASSERTION

A faster and highly-compatible implementation of the Python programming language.

Lists

by blocklistproject · JavaScript

1266 stars · License: Unlicense

Primary Block Lists

whoami

by omer-dogan · Shell

873 stars · License: GPL-3.0

Whoami is a privacy tool designed to provide the highest level of anonymity on Debian-based Linux operating systems.

wireguard-install

by Nyr · Shell

863 stars · License: MIT

WireGuard road warrior installer for Ubuntu, Debian, CentOS and Fedora

ubuntu-wsl2-systemd-script

by DamionGans · Shell

860 stars

Script to enable systemd support on current Ubuntu WSL2 images [Unsupported, no longer updated]

pi-apps

by Botspot · Shell

800 stars · License: GPL-3.0

Raspberry Pi App Store for Open Source Projects

plymouth-themes

by adi1090x · Shell

715 stars · License: GPL-3.0

A huge collection (80+) of Plymouth themes ported from Android boot animations

Top Authors in Debian

1

openstack-archive

67 Libraries

518 stars

2

Oefenweb

25 Libraries

460 stars

3

idealista

15 Libraries

158 stars

4

geerlingguy

13 Libraries

2062 stars

5

sclorg

11 Libraries

726 stars

6

angristan

9 Libraries

8445 stars

7

mrlesmithjr

9 Libraries

269 stars

8

Whonix

9 Libraries

658 stars

9

while-true-do

9 Libraries

22 stars

10

alvistack

8 Libraries

46 stars


Trending Kits in Debian

No Trending Kits are available at this moment for Debian

Trending Discussions on Debian

Dolphin KDE | Sidebar config-file location for backup and export

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead

Why does numpy.view(bool) make numpy.logical_and significantly faster?

How do I capture default arguments with `|c`?

Cypress Test Runner unexpectedly exited via a exit event with signal SIGSEGV in circleCI

Unable to install Jenkins on Ubuntu 20.04

Python 3.9.8 fails using Black and importing `typed_ast.ast3`

Sbt-native-packager cannot connect to Docker daemon

Different behaviours when initializing differently

Why can't I add a file handler in the form of self.fh in the init method?

QUESTION

Dolphin KDE | Sidebar config-file location for backup and export

Asked 2022-Apr-03 at 09:24

Where can I find the config file for the sidebar of Dolphin (the file manager), where my bookmarks for folders and devices are saved?

I want to edit it manually, back it up, and export it to a second user profile on my Debian GNU/Linux system.

I found only ~/.config/dolphinrc, which holds some global settings, but not my bookmarks.

~/.local/share/kxmlgui5/dolphin is empty, and I found nothing in ~/.local/share/dolphin/view_properties.

ANSWER

Answered 2022-Mar-01 at 11:09

~/.local/share/user-places.xbel

Source https://stackoverflow.com/questions/71307393

QUESTION

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead

Asked 2022-Mar-07 at 16:16

I was installing elasticsearch following this guide, but elasticsearch is not really the part of this question.

In the first step, I need to add the key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

and got the following message:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

The installation process was fine, but since it's deprecated, I'm looking for the new usage that replaces apt-key. (I have no problem installing the package.) From man apt-key I saw

apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.

...

Binary keyring files intended to be used with any apt version should therefore always be created with gpg --export.

but it didn't say the alternative to apt-key add. I tried

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --export

but didn't work. So what do I use after the pipe of wget when apt-key is removed?

ANSWER

Answered 2021-Nov-03 at 07:31

Answer found here: https://suay.site/?p=526

In short:

Retrieve and add the key:

curl -s URL | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/NAME.gpg --import

Authorize the user _apt:

sudo chown _apt /etc/apt/trusted.gpg.d/NAME.gpg
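For current releases (Debian 12, Ubuntu 22.04+, where apt-key is gone entirely), the generally recommended pattern is a dedicated keyring under /usr/share/keyrings referenced by a signed-by option, rather than anything in trusted.gpg.d. A sketch using the Elasticsearch key from the question; the repository line and file names are illustrative, so adjust them to the release you actually want:

```shell
# Convert the ASCII-armored key into a binary keyring outside apt's legacy store
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch \
  | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

# Reference that keyring explicitly, so the key is trusted for this repository only
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg]" \
     "https://artifacts.elastic.co/packages/7.x/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

sudo apt update
```

Unlike keys dropped into trusted.gpg.d, a signed-by keyring cannot vouch for any other repository, which is why the apt maintainers prefer it.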

Source https://stackoverflow.com/questions/68992799

QUESTION

Why does numpy.view(bool) make numpy.logical_and significantly faster?

Asked 2022-Feb-22 at 20:23

When passing a numpy.ndarray of uint8 to numpy.logical_and, it runs significantly faster if I apply numpy.view(bool) to its inputs.

a = np.random.randint(0, 255, 1000 * 1000 * 100, dtype=np.uint8)
b = np.random.randint(0, 255, 1000 * 1000 * 100, dtype=np.uint8)

%timeit np.logical_and(a, b)
126 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit np.logical_and(a.view(bool), b.view(bool))
20.9 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Can someone explain why this is happening?

Furthermore, why doesn't numpy.logical_and automatically apply view(bool) to uint8 arrays? (Is there any situation where we shouldn't use view(bool)?)

EDIT:

It seems this is an issue with the Windows environment. I just tried the same thing in the official Python Docker container (which is Debian-based) and found no difference between the two calls.

My environment:

  • OS: Windows 10 Pro 21H2
  • CPU: AMD Ryzen 9 5900X
  • Python: Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32
  • numpy: 1.22.2

ANSWER

Answered 2022-Feb-22 at 20:23

This is a performance issue in the current Numpy implementation. I can also reproduce the problem on Windows (using an Intel Skylake Xeon processor with Numpy 1.20.3). np.logical_and(a, b) executes very inefficient scalar assembly code based on slow conditional jumps, while np.logical_and(a.view(bool), b.view(bool)) executes relatively fast SIMD instructions.

Currently, Numpy uses a specific implementation for bool types. The general-purpose implementation can be significantly slower if the compiler used to build Numpy failed to auto-vectorize the code, which is apparently the case on Windows (and explains why this doesn't happen on other platforms, since the compiler is likely not the same). The Numpy code could be improved for non-bool types. Note that vectorization of Numpy is an ongoing effort and we plan to optimize this soon.


Deeper analysis

Here is the assembly code executed by np.logical_and(a, b):

Block 24:
    cmp byte ptr [r8], 0x0        ; Read a[i]
    jz <Block 27>                 ; Jump to block 27 if a[i]==0
Block 25:
    cmp byte ptr [r9], 0x0        ; Read b[i]
    jz <Block 27>                 ; Jump to block 27 if b[i]==0
Block 26:
    mov al, 0x1                   ; al = 1
    jmp <Block 28>                ; Skip the next instruction
Block 27:
    xor al, al                    ; al = 0
Block 28:
    mov byte ptr [rdx], al        ; result[i] = al
    inc r8                        ; i += 1
    inc rdx
    inc r9
    sub rcx, 0x1
    jnz <Block 24>                ; Loop again while i<a.shape[0]
As you can see, the loop uses several data-dependent conditional jumps for every pair of items read from a and b. This is very inefficient here, since with random values the branches cannot be predicted by the processor. As a result, the processor stalls for a few cycles on each misprediction (typically about 10 cycles on modern x86 processors).

Here is the assembly code executed by np.logical_and(a.view(bool), b.view(bool)):

Block 15:
    movdqu xmm1, xmmword ptr [r10]               ; xmm1 = a[i:i+16]
    movdqu xmm0, xmmword ptr [rbx+r10*1]         ; xmm0 = b[i:i+16]
    lea r10, ptr [r10+0x10]                      ; i += 16
    pcmpeqb xmm1, xmm2                           ; \
    pandn xmm1, xmm0                             ;  | Complex sequence to just do:
    pcmpeqb xmm1, xmm2                           ;  | xmm1 &= xmm0
    pandn xmm1, xmm3                             ; /
    movdqu xmmword ptr [r14+r10*1-0x10], xmm1    ; result[i:i+16] = xmm1
    sub rcx, 0x1
    jnz <Block 15>                               ; Loop again while i!=a.shape[0]//16

This code uses the SSE SIMD instruction set, which works on 128-bit registers, and contains no conditional jumps. It is far more efficient: it operates on 16 items at once, and each iteration is much cheaper.

Note that this last code is still not optimal, since most modern x86 processors (like your AMD one) support the 256-bit AVX2 instruction set (twice as fast). Moreover, the compiler generates an inefficient sequence of SIMD instructions for the logical-and that could be optimized further; it seems to assume the booleans may hold values other than 0 or 1. That said, the input arrays are too big to fit in your CPU cache, so the code is bound by the throughput of your RAM, unlike the first version. This is why the SIMD-friendly code is not drastically faster here. The difference between the two versions would certainly be much bigger with arrays smaller than about 1 MiB on your processor (as on almost every other modern processor).
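As for when the view(bool) trick is safe: viewing uint8 as bool reinterprets each byte in place, and any nonzero byte is truthy under both interpretations, so for logical operations the two paths agree element-wise. A quick sanity check (a sketch, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 255, 1_000_000, dtype=np.uint8)
b = rng.integers(0, 255, 1_000_000, dtype=np.uint8)

slow = np.logical_and(a, b)                         # general uint8 ufunc loop
fast = np.logical_and(a.view(bool), b.view(bool))   # bool fast path

# Every nonzero byte is truthy either way, so the results match exactly.
assert np.array_equal(slow, fast)
```

The caveat is that view only works when the itemsizes match (1 byte here); for wider integer dtypes you would need astype(bool), which copies.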

Source https://stackoverflow.com/questions/71225872

QUESTION

How do I capture default arguments with `|c`?

Asked 2022-Jan-11 at 17:04

I've got this function here:

my @modifiers = <command option>;
sub triple(|c(Str:D $app1!, Str:D $app2!, Str:D $key! where .chars == 1, Str:D $mod where ($mod ~~ any @modifiers) = 'command' )) {
    print_template(|c);
}

sub print_template(*@values) {
   ...work done here...
}

The problem I'm having is that if I call it without the 4th argument, with something like triple 'App1', 'App2', 'p';, the default $mod argument does not get passed on to print_template.

Is there a way to accomplish this?

For full context, this is the toy program here: https://paste.debian.net/1226675/

ANSWER

Answered 2022-Jan-11 at 17:04

OK, based on responses in IRC, this does not appear to be possible. One suggested workaround:

sub triple(|c(Str:D $app1!,
              Str:D $app2!,
              Str:D $key! where .chars == 1,
              Str:D $mod where ($mod ~~ any @modifiers) = 'command' )) {
    my \d = \(|c[0..2], c[3] // 'command');
    print_template(|d);
}

Source https://stackoverflow.com/questions/70658741

QUESTION

Cypress Test Runner unexpectedly exited via a exit event with signal SIGSEGV in circleCI

Asked 2021-Dec-10 at 11:43


I am stuck on this problem. I am running Cypress tests: when I run them locally, they run smoothly, but when I run them in CircleCI, the run throws an error partway through.
Here is what I am getting:

[334:1020/170552.614728:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[334:1020/170552.616006:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
[334:1020/170552.616185:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
[521:1020/170552.652819:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader

Current behavior:
When I run my specs headless on CircleCI, Cypress closes unexpectedly with a socket error.

Error code:

The Test Runner unexpectedly exited via a exit event with signal SIGSEGV

Please search Cypress documentation for possible solutions:

https://on.cypress.io


Platform: linux (Debian - 10.5)
Cypress Version: 8.6.0

ANSWER

Answered 2021-Oct-21 at 08:53

The issue was resolved by reverting the Cypress version to 7.6.0.

Source https://stackoverflow.com/questions/69658152

QUESTION

Unable to install Jenkins on Ubuntu 20.04

Asked 2021-Dec-08 at 05:56

I am trying to install Jenkins on my Ubuntu EC2 instance. I performed the following steps, but the install fails:

sudo apt update
sudo apt install openjdk-8-jdk
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt update   # <--- here I get the error below

root@ip-172-31-44-187:~# sudo apt update
Ign:1 https://pkg.jenkins.io/debian-stable binary/ InRelease
Err:2 https://pkg.jenkins.io/debian-stable binary/ Release
  Certificate verification failed: The certificate is NOT trusted. The certificate chain uses expired certificate. Could not handshake: Error in the certificate verification. [IP: 151.101.154.133 443]
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:5 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:6 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Reading package lists... Done
E: The repository 'http://pkg.jenkins.io/debian-stable binary/ Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

ANSWER

Answered 2021-Oct-09 at 07:17

Yeah, I had the same problem starting yesterday; I think it appeared after yesterday's update to Jenkins 2.303.2 LTS.

Just run apt update, then apt upgrade, then apt-get install jenkins -y.

It worked for me.

Source https://stackoverflow.com/questions/69495517

QUESTION

Python 3.9.8 fails using Black and importing `typed_ast.ast3`

Asked 2021-Nov-17 at 08:17

Since updating to python@3.9.8 we get an error while using Black in our CI pipeline.

black....................................................................Failed
- hook id: black
- exit code: 1
Traceback (most recent call last):
  File "../.cache/pre-commit/repol9drvp84/py_env-python3/bin/black", line 5, in <module>
    from black import patched_main
  File "../.cache/pre-commit/repol9drvp84/py_env-python3/lib/python3.9/site-packages/black/__init__.py", line 52, in <module>
    from typed_ast import ast3, ast27
  File "../.cache/pre-commit/repol9drvp84/py_env-python3/lib/python3.9/site-packages/typed_ast/ast3.py", line 40, in <module>
    from typed_ast import _ast3
ImportError: ../.cache/pre-commit/repol9drvp84/py_env-python3/lib/python3.9/site-packages/typed_ast/_ast3.cpython-39-x86_64-linux-gnu.so: undefined symbol: _PyUnicode_DecodeUnicodeEscape

The error can be easily reproduced with:

% pip install typed_ast
% python3 -c 'from typed_ast import ast3'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError:
/usr/lib/python3/dist-packages/typed_ast/_ast3.cpython-39-x86_64-linux-gnu.so:
undefined symbol: _PyUnicode_DecodeUnicodeEscape

Currently the only workaround is downgrading to python@3.9.7.

Is any other fix available?

See also https://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg1829077.html

ANSWER

Answered 2021-Nov-17 at 08:17

The initial error was a failing Black pipeline. Black failed because it was pinned to an older version, which now breaks with Python 3.9.8.

Updating Black to the latest version, 21.10b0, fixed the error for me.

See also typed_ast issue #169:

For others who may find this in a search, I ran into this problem via black because I had black pinned to an older version. The current version of black appears to no longer use typed-ast and thus won't encounter this issue.

Update:

using the latest typed-ast version (>=1.5.0) seems to work as well

e.g. pip install typed-ast --upgrade

Source https://stackoverflow.com/questions/69912264

QUESTION

Sbt-native-packager cannot connect to Docker daemon

Asked 2021-Nov-01 at 22:24

Here is my configuration which worked for more than one year but suddenly stopped working.

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""


  stage: deploy
  image: "hseeberger/scala-sbt:11.0.9.1_1.4.4_2.13.4"
  before_script:
    - apt-get update
    - apt-get install sudo
    - apt-get install apt-transport-https ca-certificates curl software-properties-common -y
    - curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get install docker-ce -y
    - sudo service docker start
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

  script:
    - sbt docker:publishLocal

The error in GitlabCI is the following:

[warn] [1] sbt-native-packager wasn't able to identify the docker version. Some features may not be enabled
[warn] sbt-native packager tries to parse the `docker version` output. This can fail if
[warn]
[warn]   - the output has changed:
[warn]     $ docker version --format '{{.Server.APIVersion}}'
[warn]
[warn]   - no `docker` executable is available
[warn]     $ which docker
[warn]
[warn]   - you have not the required privileges to run `docker`
[warn]
[warn] You can display the parsed docker version in the sbt console with:
[warn]
[warn]   sbt:your-project> show dockerApiVersion
[warn]
[warn] As a last resort you could hard code the docker version, but it's not recommended!!
[warn]
[warn]   import com.typesafe.sbt.packager.docker.DockerApiVersion
[warn]   dockerApiVersion := Some(DockerApiVersion(1, 40))
[warn]
[success] All package validations passed
[error] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[info] Removing intermediate image(s) (labeled "snp-multi-stage-id=9da90b0c-75e0-4f46-98eb-a17a1998a3b8")
[error] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[error] Something went wrong while removing multi-stage intermediate image(s)
[error] java.lang.RuntimeException: Nonzero exit value: 1
[error]     at com.typesafe.sbt.packager.docker.DockerPlugin$.publishLocalDocker(DockerPlugin.scala:687)
[error]     at com.typesafe.sbt.packager.docker.DockerPlugin$.$anonfun$projectSettings$41(DockerPlugin.scala:266)
[error]     at com.typesafe.sbt.packager.docker.DockerPlugin$.$anonfun$projectSettings$41$adapted(DockerPlugin.scala:258)
[error]     at scala.Function1.$anonfun$compose$1(Function1.scala:49)
[error]     at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:62)
[error]     at sbt.std.Transform$$anon$4.work(Transform.scala:68)
[error]     at sbt.Execute.$anonfun$submit$2(Execute.scala:282)
[error]     at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:23)
[error]     at sbt.Execute.work(Execute.scala:291)
[error]     at sbt.Execute.$anonfun$submit$1(Execute.scala:282)
[error]     at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:265)
[error]     at sbt.CompletionService$$anon$2.call(CompletionService.scala:64)
[error]     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error]     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
[error]     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error]     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[error]     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[error]     at java.base/java.lang.Thread.run(Thread.java:834)
[error] (Docker / publishLocal) Nonzero exit value: 1

ANSWER

Answered 2021-Aug-16 at 16:16

It looks like you are trying to run the Docker daemon inside your build image.

On Linux, you need to make sure that the current user (the one running sbt) has the proper permissions to run docker commands, via some post-install steps.

Maybe you could fix your script by running sudo sbt docker:publishLocal instead?

It is more common now to use a service so that a docker daemon is already set up for your builds:

services:
  - docker:dind

See this example on gitlab. There is also a section in the (EE) docs.
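Putting the suggestion together, the job could look roughly like the following sketch. The docker:dind service and the DOCKER_HOST/DOCKER_TLS_CERTDIR variables are the standard GitLab dind pattern; the job name and image are taken from the question, and details like the port depend on your runner configuration:

```yaml
deploy:
  stage: deploy
  image: "hseeberger/scala-sbt:11.0.9.1_1.4.4_2.13.4"
  services:
    - docker:dind          # side container providing the Docker daemon
  variables:
    DOCKER_HOST: tcp://docker:2375   # point the docker CLI/sbt at the service
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - sbt docker:publishLocal
```

This removes the whole before_script block that installed and started docker-ce inside the build container, which is what stopped working.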

Source https://stackoverflow.com/questions/68683399

QUESTION

Different behaviours when initializing differently

Asked 2021-Nov-01 at 14:28

When trying to initialize a Vector using the result of some operation in Eigen, the result seems to be different depending on what syntax is used, i.e., on my machine, the assertion at the end of the following code fails:

const unsigned int n = 25;
Eigen::MatrixXd R = Eigen::MatrixXd::Random(n,n);
Eigen::VectorXd b = Eigen::VectorXd::Random(n);

Eigen::VectorXd x = Eigen::VectorXd::Zero(n);
Eigen::VectorXd y = Eigen::VectorXd::Zero(n);

y = R.triangularView<Eigen::Upper>().solve(b);
x << R.triangularView<Eigen::Upper>().solve(b);

assert((x-y).norm() < std::numeric_limits<double>::epsilon()*10E6);

I am aware of potential rounding errors, but in my understanding, the two evaluations of R.triangularView<Eigen::Upper>().solve(b) should have exactly the same precision errors, and therefore the same result. This also only happens when initializing one variable with << and the other with operator=, but not if both variables are assigned the same way.

When evaluating R.lu().solve(b) for both vectors instead of using only backward substitution on the upper triangular part, and comparing the results, the difference is far smaller, but still exists. Why are the two vectors different if they are assigned in nearly the same, deterministic way?

I tried this code on Arch Linux and Debian on an x86-64 architecture, using Eigen version 3.4.0, with C++11, C++17 and C++20, compiled with both clang and gcc.

ANSWER

Answered 2021-Oct-05 at 15:31

The condition number of the matrix that defines the linear system you are solving is on the order of 10⁷. Roughly speaking, this means that after solving this system numerically the last 7 digits will be incorrect, leaving you with roughly 9 correct digits, or an error of around 10⁻⁹. It seems like

y = R.triangularView<Eigen::Upper>().solve(b);
x << R.triangularView<Eigen::Upper>().solve(b);

produce slightly different machine code. Since your matrix is that ill-conditioned, we expect an error on the order of 10⁻⁹, or in other words, that the computed solutions differ by around 10⁻⁹.
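The size of this effect can be sanity-checked outside Eigen. The following is a rough NumPy sketch (my own illustration, not from the original answer): back substitution is backward stable, so the relative residual stays near machine epsilon, while the forward error, and hence the possible disagreement between two differently compiled solves, is only bounded by cond(R) times epsilon:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25
R = np.triu(rng.uniform(-1.0, 1.0, (n, n)))  # random upper-triangular, like the Eigen setup
b = rng.uniform(-1.0, 1.0, n)

x = np.linalg.solve(R, b)

# Backward error: tiny regardless of conditioning.
rel_residual = np.linalg.norm(b - R @ x) / (np.linalg.norm(R) * np.linalg.norm(x))

# Forward error bound: cond(R) * machine epsilon.
cond = np.linalg.cond(R)
eps = np.finfo(float).eps
print(rel_residual, cond * eps)
```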

You can verify the behavior using the code below. If you activate the line

R += 10*MatrixXd::Identity(n,n);

you decrease the condition number of the matrix by adding a diagonal term, and hence the error is significantly reduced.
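A quick numerical cross-check of that claim (a NumPy sketch of my own, using the same kind of random upper-triangular matrix): adding 10 on the diagonal moves the diagonal entries away from zero and shrinks the condition number dramatically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25
R = np.triu(rng.uniform(-1.0, 1.0, (n, n)))

cond_before = np.linalg.cond(R)                  # diagonal entries can be close to 0
cond_after = np.linalg.cond(R + 10 * np.eye(n))  # diagonal entries now lie in [9, 11]
print(cond_before, cond_after)
```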

#include <iostream>
#include <Eigen/Dense>
#include <Eigen/SVD>

using Eigen::MatrixXd;
using Eigen::VectorXd;
using Eigen::BDCSVD;

int main()
{
  const unsigned int n = 25;
  MatrixXd R = MatrixXd::Random(n,n);
  VectorXd b = VectorXd::Random(n);

  VectorXd x = VectorXd::Zero(n);
  VectorXd y = VectorXd::Zero(n);

  // Uncomment to reduce the condition number
  // R += 10*MatrixXd::Identity(n,n);

  y = R.triangularView<Eigen::Upper>().solve(b);
  x << R.triangularView<Eigen::Upper>().solve(b);

  std::cout << "res(x): " << (b - R.triangularView<Eigen::Upper>() * x).norm() << std::endl;
  std::cout << "res(y): " << (b - R.triangularView<Eigen::Upper>() * y).norm() << std::endl;

  Eigen::BDCSVD<Eigen::MatrixXd> svd(R.triangularView<Eigen::Upper>());
  VectorXd sigma = svd.singularValues();
  std::cout << "cond: " << sigma.maxCoeff() / sigma.minCoeff() << std::endl;

  std::cout << "error norm: " << (x-y).norm() << std::endl;
  std::cout << "error max: " << (x-y).lpNorm<Eigen::Infinity>() << std::endl;

  return 0;
}

Note that Eigen relies heavily on function inlining and compiler optimization. For each call to the solve function, the compiler generates an optimized solve routine depending on the context. Hence, operator<< and operator= may allow for different optimizations and thus lead to different machine code. At least with my compiler, if you compute

x << VectorXd(R.triangularView<Eigen::Upper>().solve(b));

the values for x and y agree.

Source https://stackoverflow.com/questions/69450136

QUESTION

Why can't I add a file handle as self.fh in the __init__ method?

Asked 2021-Oct-20 at 03:28

OS and Python info:

uname -a
Linux debian 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
python3 --version
Python 3.9.2

Here is a simple class which can start multiprocessing.

import time
from multiprocessing.pool import Pool

class my_mp(object):
    def __init__(self):
        self.process_num = 3
        fh = open('test.txt', 'w')
    def run_task(self,i):
        print('process {} start'.format(str(i)))
        time.sleep(2)
        print('process {} end'.format(str(i)))
    def run(self):
        pool = Pool(processes = self.process_num)
        for i in range(self.process_num):
            pool.apply_async(self.run_task,args = (i,))
        pool.close()
        pool.join()

Initialize the my_mp class, then start the multiprocessing.

ins = my_mp()
ins.run()
process 0 start
process 1 start
process 2 start
process 0 end
process 2 end
process 1 end

Now replace fh = open('test.txt', 'w') with self.fh = open('test.txt', 'w') in the my_mp class and try again.

ins = my_mp()
ins.run()

No output! Why does no process start?

>>> from multiprocessing.pool import Pool
>>>
>>> class my_mp(object):
...     def __init__(self):
...         self.process_num = 3
...         fh = open('test.txt', 'w')
...     def run_task(self,i):
...         print('process {} start'.format(str(i)))
...         time.sleep(2)
...         print('process {} end'.format(str(i)))
...     def run(self):
...         pool = Pool(processes = self.process_num)
...         for i in range(self.process_num):
...             pool.apply_async(self.run_task,args = (i,))
...         pool.close()
...         pool.join()
...
>>> x = my_mp()
>>> x.run()
process 0 start
process 1 start
process 2 start
process 2 end
process 0 end
process 1 end
>>> class my_mp(object):
...     def __init__(self):
...         self.process_num = 3
...         self.fh = open('test.txt', 'w')
...     def run_task(self,i):
...         print('process {} start'.format(str(i)))
...         time.sleep(2)
...         print('process {} end'.format(str(i)))
...     def run(self):
...         pool = Pool(processes = self.process_num)
...         for i in range(self.process_num):
...             pool.apply_async(self.run_task,args = (i,))
...         pool.close()
...         pool.join()
...
>>> x = my_mp()
>>> x.run()
>>> x.run()
>>> x = my_mp()
>>> class my_mp(object):
...     def __init__(self):
...         self.process_num = 3
...         fh = open('test.txt', 'w')
...         self.fh = fh
...     def run_task(self,i):
...         print('process {} start'.format(str(i)))
...         time.sleep(2)
...         print('process {} end'.format(str(i)))
...     def run(self):
...         pool = Pool(processes = self.process_num)
...         for i in range(self.process_num):
...             pool.apply_async(self.run_task,args = (i,))
...         pool.close()
...         pool.join()
...
>>> x = my_mp()
>>> x.run()
>>>

Why can't I add a file handle as self.fh in the __init__ method? I have never even used the file handle defined in __init__ in any process.

ANSWER

Answered 2021-Oct-12 at 01:57

I did some investigation, but it does not fully answer the question. I am going to post the results here in case they help somebody else.

First, if the subprocess fails, there is no traceback. So I added an additional line to display the result of the subprocesses. It should be None if no errors occur. The new code:

        for i in range(3):
            res = pool.apply_async(self.run_task, args=(i,))
            print(res.get())

The output:

Traceback (most recent call last):
  File "C:/temp/LeetCode-solutions/multithreading.py", line 43, in <module>
    mp.run()
  File "C:/temp/LeetCode-solutions/multithreading.py", line 19, in run
    self.multiprocessing()
  File "C:/temp/LeetCode-solutions/multithreading.py", line 30, in multiprocessing
    print(res.get())
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\multiprocessing\pool.py", line 771, in get
    raise self._value
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\multiprocessing\pool.py", line 537, in _handle_tasks
    put(task)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\multiprocessing\connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
TypeError: cannot pickle '_io.TextIOWrapper' object

It seems that the pool ends up with the file object as part of what it pickles for self.run_task. The error has a long history on Stack Overflow, but the best explanation IMO is here: https://discuss.python.org/t/use-multiprocessing-module-to-handle-a-large-file-in-python/6604
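The unpicklable piece is the open file itself; pickling any _io file object raises exactly this TypeError. A minimal reproduction (names are my own; os.devnull stands in for test.txt):

```python
import os
import pickle

fh = open(os.devnull, 'w')    # any open text-mode file handle
try:
    pickle.dumps({'process_num': 3, 'fh': fh})
    picklable = True
except TypeError:             # "cannot pickle '_io.TextIOWrapper' object"
    picklable = False
fh.close()
print(picklable)  # False
```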

I didn't find out exactly why having one file-object attribute makes the whole task submission fail, but I hope that unveils some of the mystery.
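My reading of the mechanism (an assumption based on how CPython pickles bound methods, not something stated in the original answer): pool.apply_async(self.run_task, ...) has to pickle the bound method self.run_task, and pickling a bound method pickles the instance it is bound to, so every attribute of the object, including an open file, comes along for the ride:

```python
import os
import pickle

class Worker:                         # hypothetical stand-in for my_mp
    def __init__(self, with_file=False):
        self.process_num = 3
        if with_file:
            self.fh = open(os.devnull, 'w')
    def run_task(self, i):
        return i

pickle.dumps(Worker().run_task)       # fine: the instance has only picklable attributes

w = Worker(with_file=True)
try:
    pickle.dumps(w.run_task)          # the pickled instance now carries an open file
    failed = False
except TypeError:
    failed = True
w.fh.close()
print(failed)  # True
```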

Final test: the following code works as expected, since run_task is now a module-level function and self (with its file handle) never has to be pickled.

from multiprocessing.pool import Pool
import time


class MyMP(object):
    def __init__(self):
        self.process_num = 3

    def run(self):
        self.fh = open('test.txt', 'w')
        pool = Pool(processes=3)
        for i in range(3):
            res = pool.apply_async(run_task, args=(i,))
            print(res.get())
        pool.close()
        pool.join()
        self.fh.close()


def run_task(i):
    print('process {} start'.format(str(i)))
    time.sleep(2)
    print('process {} end'.format(str(i)))

Source https://stackoverflow.com/questions/69507269

Community Discussions contain sources that include Stack Exchange Network
