pieces | An experimental BitTorrent client in Python | Stream Processing library
kandi X-RAY | pieces Summary
An experimental BitTorrent client implemented in Python 3 using asyncio. The client is not a practical BitTorrent client: it lacks too many features to really be useful. It was implemented for fun, in order to learn more about BitTorrent as well as Python's asyncio library. A separate walkthrough covers the BitTorrent protocol and how pieces works under the hood. The client currently only supports downloading of data, although adding the remaining features for seeding and multi-file torrents should not be that hard. Even though it's not practical at this point, feel free to learn from it, steal from it, improve it, laugh at it or just ignore it.
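For a taste of the protocol the client speaks, here is a minimal sketch (an illustration, not code from pieces itself) of the fixed 68-byte handshake message a BitTorrent peer sends on connect; the info hash and peer id below are dummy values:

```python
import struct

# BitTorrent handshake layout (BEP 3):
#   1 byte   pstrlen (19)
#   19 bytes pstr ("BitTorrent protocol")
#   8 bytes  reserved flags (all zero here)
#   20 bytes info_hash (SHA-1 of the torrent's info dictionary)
#   20 bytes peer_id
def build_handshake(info_hash: bytes, peer_id: bytes) -> bytes:
    assert len(info_hash) == 20 and len(peer_id) == 20
    return struct.pack(">B19s8x20s20s", 19, b"BitTorrent protocol",
                       info_hash, peer_id)

msg = build_handshake(b"\x00" * 20, b"-PC0001-000000000000")
print(len(msg))  # 68
```

The remote peer answers with a handshake of the same shape, and each side verifies the info hashes match before exchanging further messages.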
Top functions reviewed by kandi - BETA
- Start the server
- Perform a handshake
- Close the peer
- Update peer index
- Add a peer
- Called when a block is received
- Returns True if the piece has the same hash
- Returns True if all blocks have completed
- Reset all blocks
- Start download
- Connect to tracker
- Raise ConnectionError
- Stop the stream
- Get the next request for a given peer
- Check for expired blocks
- Get a random piece from a peer
- Returns the next pending request
- Returns a list of peers
- Decode a port number
- Close this peer
- Stops the loop
- Encode the data as a byte string
- Encode the next type
- Encode a dictionary
- Encode the given value
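The last four functions in the list implement bencoding, the serialization format used in .torrent files and tracker responses. As a rough sketch of the format (an illustration, not the actual encoder used by pieces):

```python
# Bencoding in a nutshell: integers -> i<n>e, byte strings -> <len>:<bytes>,
# lists -> l...e, dictionaries -> d...e with keys in sorted order.
# Illustrative sketch only, not the pieces implementation.
def bencode(value) -> bytes:
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode("utf-8")
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted(
            (((k.encode("utf-8") if isinstance(k, str) else k), v)
             for k, v in value.items()),
            key=lambda kv: kv[0],
        )
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value)!r}")

print(bencode({"spam": ["a", 42]}))  # b'd4:spaml1:ai42eee'
```

Decoding is the mirror image: peek at the first byte to decide the type, then consume the matching terminator.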
pieces Key Features
pieces Examples and Code Snippets
public static boolean validate(Piece[][] solution) {
    if (solution == null) return false;
    for (int r = 0; r < solution.length; r++) {
        for (int c = 0; c < solution[r].length; c++) {
            Piece piece = solution[r][c];
            if (piece == null) return false; // an empty cell means the solution is incomplete
        }
    }
    return true;
}
def _ragged_split(tensor, pieces):
  """Like split for 1D tensors but allows case where len % pieces != 0.

  Args:
    tensor: `tf.Tensor` that must be 1D.
    pieces: a positive integer specifying the number of pieces into which
      tensor should be split.
  """
private String getParsedPieces(URI uri) {
    logger.info("*** List of parsed pieces ***");
    URISCHEME = uri.getScheme();
    logger.info("URISCHEME: " + URISCHEME);
    URISCHEMESPECIFIC = uri.getSchemeSpecificPart();
    logger.info("URISCHEMESPECIFIC: " + URISCHEMESPECIFIC);
Community Discussions
Trending Discussions on pieces
QUESTION
The following code runs correctly.
...ANSWER
Answered 2022-Apr-02 at 18:41
This is a concept in Rust called temporary lifetime extension. There are rules that govern when a temporary's lifetime is extended, though. It doesn't happen any time a borrow of a temporary occurs; the expression on the right side of a let statement needs to be a so-called "extending expression." This part of the document linked above neatly explains the first and third examples in your question:
So the borrow expressions in &mut 0, (&1, &mut 2), and Some { 0: &mut 3 } are all extending expressions. The borrows in &0 + &1 and Some(&mut 0) are not: the latter is syntactically a function call expression.
Since the expression assigned to a in the third example isn't an extending expression, the lifetime of the temporary is not extended beyond the let statement, and that causes the lifetime problem.
Why does this work for Some(&3) then? Because of constant promotion:
Promotion of a value expression to a 'static slot occurs when the expression could be written in a constant and borrowed, and that borrow could be dereferenced where the expression was originally written, without changing the runtime behavior.
Since you borrow immutably, Rust allocates a hidden, static i32 and borrows that instead, giving you an Option<&'static i32>, which is obviously valid for the whole life of the program itself. This is not technically temporary lifetime extension, because after the i32 is promoted to have static lifetime it's no longer a temporary.
It's basically equivalent to this (except that HIDDEN has no name):
QUESTION
I'm getting the following deprecation warning when running unit tests in a brand new Angular 12 application:
(node:14940) [log4js-node-DEP0004] DeprecationWarning: Pattern %d{DATE} is deprecated due to the confusion it causes when used. Please use %d{DATETIME} instead.
The stack trace shows why: log4js prompts because "karma" depends on it.
The warning itself is clear as to what should be done, but there are two key missing pieces of information:
- it doesn't say when/if the old syntax will stop working
- it doesn't provide a workaround (other than forking karma and replacing the deprecated syntax with the new one, which I'm definitely not going to do).
Downgrading log4js to an earlier version that doesn't output the warning, using forceResolutions, doesn't seem like a good idea, especially since I've found a few GitHub threads related to vulnerabilities in it, although karma doesn't seem to be affected.
The question: are there actionable paths for not getting the warning, or is "and now we wait" (for a karma update) the only option?
Note: I've also asked this on karma's repo.
...ANSWER
Answered 2022-Feb-16 at 17:00
Got the fix from the karma maintainers: update karma (in package.json > devDependencies.karma) to ^6.3.12.
Warnings gone. Well done, karma. That was fast!
QUESTION
These are two pieces of code that I ran under the C++11 standard. I expected the post-decrement of the iterator to produce the same effect, but these two pieces of code produce completely different results. Where is my understanding off?
...ANSWER
Answered 2022-Feb-08 at 15:38
Your code invokes undefined behaviour.
The begin iterator is not decrementable and the behavior is undefined if --container.begin() is evaluated.
https://en.cppreference.com/w/cpp/named_req/BidirectionalIterator
As such, anything could happen.
QUESTION
I've built this new ggplot2 geom layer I'm calling geom_triangles (see https://github.com/ctesta01/ggtriangles/) that plots isosceles triangles given aesthetics including x, y, z, where z is the height of the triangle and the base of the isosceles triangle has midpoint (x,y) on the graph.
What I want is for the geom_triangles() layer to automatically provide legend components for the height and width of the triangles, but I am not sure how to do that.
I understand based on this reference that I may need to adjust the draw_key argument in the ggproto StatTriangles object, but I'm not sure how I would do that and can't seem to find examples online of how to do it. I've been looking at the source code in ggplot2 for the draw_key functions, but I'm not sure how I would introduce multiple legend components (one for each of height and width) in a single draw_key argument in the StatTriangles ggproto.
ANSWER
Answered 2022-Jan-30 at 18:08
I think you might be slightly overcomplicating things. Ideally, you'd just want a single key drawing method for the whole layer. However, because you're using a Stat to do the majority of calculations, this becomes hairy to implement. In my answer, I'm avoiding this.
Let's say I'd want to use a geom-only implementation of such a layer. I can make the following (simplified) class/constructor pair. Below, I haven't bothered with width_scale or height_scale parameters, just for simplicity.
QUESTION
The code below achieves the desired output. Is there a more elegant way to do this?
For example, is there some native JavaScript function like flatMap etc. that would help?
(I know I could get rid of the intermediate variable pieces.)
ANSWER
Answered 2022-Jan-15 at 04:01
It's more concise, and I think prettier, to pair fromEntries with a map over .entries.
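For comparison, the equivalent of the fromEntries/entries/map pairing in Python (the language of this repo) is a dict comprehension; the data below is invented for illustration:

```python
# Rebuild a dict while transforming its entries: the Python analog of
# Object.fromEntries(Object.entries(obj).map(...)). Example data is made up.
prices = {"apple": 1.0, "pear": 2.0}

doubled = {name: value * 2 for name, value in prices.items()}
print(doubled)  # {'apple': 2.0, 'pear': 4.0}

# The literal translation also works: dict() plays the role of fromEntries.
also_doubled = dict((name, value * 2) for name, value in prices.items())
```

Both forms avoid any intermediate variable for the transformed entry list.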
QUESTION
I have a callback based API like this:
...ANSWER
Answered 2022-Jan-04 at 16:03
and an extension function which converts the API into a hot flow
This extension looks correct; however, the flow is not hot (nor should it be). It only registers a callback when an actual collection starts, and unregisters when the collector is cancelled (this includes when the collector uses terminal operators that limit the number of items, such as .first() or .take(n)).
This is quite an important note to keep in mind for your other questions.
In the absence of that SupervisorJob(), it appears that test will never end. Maybe collecting the flow never ends for some reason, which I don't understand
As mentioned above, due to how the flow is constructed (and how the CallbackApi works), the flow collection cannot end by decision of the producer (the callback API). It can only stop by cancellation of the collector, which will also unregister the corresponding callback (which is good).
The reason your custom job allows the test to end is probably because you're escaping structured concurrency by overriding the job in the context with a custom one that doesn't have the current job as parent. However, you're likely still leaking that never-ending coroutine from the scope that is never cancelled.
I'm feeding captured callback in a separate coroutine.
And that's correct, although I don't understand why you call removeListener from this separate coroutine. What callback are you unregistering here? Note that this also cannot have any effect on the flow, because even if you could unregister the callback that was created in the callbackFlow builder, it wouldn't magically close the channel of the callbackFlow, so the flow wouldn't end anyway (which I'm assuming is what you tried to do here).
Also, unregistering the callback from outside would prevent you from checking that it was actually unregistered by your production code.
2- If I remove the launch body that contains callbackSlot.captured.onResult(10), the test will fail with this error: UninitializedPropertyAccessException: lateinit property captured has not been initialized. I would think that yield should start the flow.
yield() is quite brittle. If you use it, you must be very conscious of how the code of each concurrent coroutine is currently written. The reason it's brittle is that it will only yield the thread to other coroutines until the next suspension point. You can't predict which coroutine will be executed when yielding, nor can you predict which one the thread will resume after reaching the suspension point. If there are a couple of suspensions, all bets are off. If there are other running coroutines, all bets are off too.
A better approach is to use kotlinx-coroutines-test, which provides utilities like advanceUntilIdle that make sure other coroutines are all done or waiting on a suspension point.
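The same brittleness exists in Python's asyncio, the library pieces itself is built on: awaiting asyncio.sleep(0) yields control only once, which is enough only when every other ready coroutine finishes without suspending again. A small sketch:

```python
import asyncio

async def producer(events):
    # No awaits inside: completes entirely during one scheduling slot.
    events.append("produced")

async def main():
    events = []
    task = asyncio.create_task(producer(events))
    # Yield once: the loop runs the ready producer task, then resumes us.
    # If producer suspended internally, a single yield would not suffice,
    # which is exactly the fragility described above.
    await asyncio.sleep(0)
    events.append("resumed")
    await task
    return events

print(asyncio.run(main()))  # ['produced', 'resumed']
```

As soon as producer gains an internal await, the ordering stops being predictable, and a drain-until-idle utility becomes the robust choice.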
Now how to fix this test? I can't test anything right now, but I would probably approach it this way:
- use runTest from kotlinx-coroutines-test instead of runBlocking, to control better when other coroutines run (and wait, for instance, for the flow collection to do something)
- start the flow collection in a coroutine (just launch/launchIn(this) without a custom scope) and keep a handle to the launched Job (the return value of launch/launchIn)
- call the captured callback with a value, advanceUntilIdle() to ensure the flow collector's coroutine can handle it, and then assert that the list got the element (note: since everything is single-threaded and the callback is not suspending, this would deadlock if there was no buffer, but callbackFlow uses a default buffer so it should be fine)
- optional: repeat the above several times with different values and confirm they are collected by the flow
- cancel the collection job, advanceUntilIdle(), and then test that the callback was unregistered (I'm not a Mockk expert, but there should be something to check that removeListener was called)
Note: maybe I'm old school, but if your CallbackApi is an interface (it's a class in your example, but I'm not sure to what extent it reflects the reality), I'd rather implement a mock manually using a channel to simulate events and assert expectations. I find it easier to reason about and to debug. Here is an example of what I mean.
QUESTION
First, the question: is there a way to choose the platform (e.g. x86_64, AMD64, ARM64) for a GitHub Codespace?
Here's what I've found so far:
Attempt 1 (not working):
From within GitHub.com, you can choose the "machine" for a Codespace, but the only options are RAM and disk size.
Attempt 2 (EDIT: not working): devcontainer.json
When you create a Codespace, you can specify options by creating a top-level .devcontainer folder with two files: devcontainer.json and Dockerfile.
Here you can customize runtimes, installed packages, etc., but the docs don't say anything about determining architecture...
...however, the VSCode docs for devcontainer.json list a runArgs option, which "accepts Docker CLI arguments"...
...and the Docker CLI docs on --platform say you should be able to pass --platform linux/amd64 or --platform linux/arm64, but...
When I tried this, the Codespace would just hang, never finishing building.
Attempt 3 (in progress): specify in Dockerfile
This route seems the most promising, but it's all new to me (containerization, codespaces, docker). It's possible that Attempts 2 and 3 work in conjunction with one another. At this point, though, there are too many new moving pieces, and I need outside help.
- Does GitHub Codespaces support this?
- Would you pass it in the Dockerfile or devcontainer.json? How?
- How would you verify this, anyway? [Solved: dpkg --print-architecture or uname -a]
- For Windows, presumably you'd need a license (I didn't see anything on GitHub about pre-licensed codespaces), but that might be out of scope for the question.
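Besides dpkg and uname, plain Python (assuming it is installed in the container) reports the same information:

```python
import platform

# Machine architecture of the current host/container,
# e.g. 'x86_64' on Intel/AMD hosts or 'aarch64'/'arm64' on ARM.
print(platform.machine())

# Fuller picture: system, release, machine, and more.
print(platform.uname())
```

This is handy inside minimal images where dpkg may not be present.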
References:
https://code.visualstudio.com/docs/remote/devcontainerjson-reference
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/builder/
https://docs.docker.com/desktop/multi-arch/
https://docs.docker.com/buildx/working-with-buildx/
ANSWER
Answered 2021-Dec-17 at 21:44
EDIT: December 2021
I received a response from GitHub support:
The VM hosts for Codespaces are only x86_64 and we do not offer any ARM64 machines.
So for now, setting the platform does nothing, or fails.
But if they end up supporting multiple platforms, you should be able to select one in the Dockerfile with FROM --platform=linux/arm64 [image-name] (or linux/amd64), which is working for me in the non-cloud version of Docker.
Original answer:
I may have answered my own question.
In the Dockerfile, I had FROM alpine and changed it to FROM --platform=linux/amd64 alpine or FROM --platform=linux/x86-64 alpine, then checked at the command line with uname -a to print the architecture.
Still verifying, but seems promising. [EDIT: Nope]
So, despite the above, I can only get GitHub Codespaces to run x86-64. Nevertheless, the above syntax seems correct.
A clue: in the logs that appear while the codespace is building, I saw target OS: x86.
Maybe GitHub just doesn't support other architectures yet. Still investigating.
QUESTION
I have two pieces of recursive code, intending to recursively print out half of the array until we get to arrays of array length 1. The code without variable assignment runs infinitely while the code with variable assignment behaves as expected.
Any clues why this is the case?
Runs infinitely, CAREFUL
...ANSWER
Answered 2021-Sep-01 at 13:49
Because it lacks var, let or const, halfway has global scope, as if you wrote window.halfway. As a result, all recursive calls modify and use the same single variable.
In the 1st function the value is changed in the first recursive call before it can be used in the second recursive call. In my testing this actually led to a kind of Stack Overflow error (or rather a Maximum call stack size error), very appropriate for this site :-).
In the 2nd function the value is used twice before the recursive calls start, and then it gets modified by both after each other.
Issue solved by using const:
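The underlying lesson is language-independent: state used across recursive calls must be per-call, not shared. The same halving recursion sketched in Python, where the midpoint is a fresh local in every call:

```python
def halves(arr, out):
    """Recursively split arr until only pieces of length 1 remain."""
    if len(arr) <= 1:
        out.append(arr)
        return out
    halfway = len(arr) // 2  # local to this call, so recursion is safe
    halves(arr[:halfway], out)
    halves(arr[halfway:], out)
    return out

print(halves([1, 2, 3, 4], []))  # [[1], [2], [3], [4]]
```

Had halfway been a module-level variable mutated by each call, the two recursive calls would interfere with each other exactly as in the JavaScript version.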
QUESTION
My goal is to clean up my code so that I can more easily make dialog trees without constant copied pieces that don't have to be there. I can do it cleanly in python, but discord.py seems to have different requirements. Here is a sample of my current, very redundant code:
...ANSWER
Answered 2021-Aug-16 at 15:08
di = {'hallucinated': {
    1: {
        'Text': [
            "It sounds like you may be hallucinating, would you like help with trying to disprove it?"
        ],
        'Options': {'yes': 2, 'no': 3}
    },
    2: {
        'Text': [
            "Is it auditory, visual, or tactile?"
        ],
        'Options': {
            "auditory": 4,
            "visual": 5,
            "tactile": 6
        }
    }
}}
# Modified the dictionary a little bit, so we can get the option values directly, and the starter keywords.

def make_check(options, message):
    def predicate(msg):
        return msg.author == message.author and msg.channel == message.channel and msg.content.lower() in options
    return predicate
# The check function in the original code was repetitive; a higher-order function solves this

async def response(dialogues, number, message, client):
    await message.channel.send("\n".join(dialogues[number]['Text']))
    # list(...) yields the option keywords themselves; taking x[0] over the
    # dict would only have produced the first character of each keyword
    options = list(dialogues[number].get('Options', {}))
    if options:
        msg = await client.wait_for("message", check=make_check(options, message), timeout=30.0)
        # navigate using the message text, not the Message object itself
        return await response(dialogues, dialogues[number]['Options'][msg.content.lower()], message, client)
    # no options: this branch of the dialogue ends here
# Recursion removes the redundant code; the numbers navigate through the dialogues

async def on_message(message):
    # basic on_message for example
    starters = ['hallucinated']
    initial = [x for x in starters if x in message.content.lower()]
    if initial:
        initial_opening_conversation = initial[0]
        await response(di.get(initial_opening_conversation), 1, message, client)
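The recursive navigation itself can be exercised without Discord at all. A stripped-down sketch (the tree contents and scripted answers are invented for illustration) that follows pre-scripted answers through the same kind of Text/Options tree:

```python
# Discord-free sketch of the recursive dialogue walk; tree and answers
# are invented for illustration.
tree = {
    1: {"Text": "Would you like help?", "Options": {"yes": 2, "no": 3}},
    2: {"Text": "Is it auditory or visual?",
        "Options": {"auditory": 4, "visual": 5}},
    3: {"Text": "Okay, take care."},
    4: {"Text": "Auditory branch."},
    5: {"Text": "Visual branch."},
}

def walk(tree, node, answers):
    """Return the texts visited while following scripted answers."""
    visited = [tree[node]["Text"]]
    options = tree[node].get("Options", {})
    if not options or not answers:
        return visited  # leaf node, or no more scripted input
    choice, rest = answers[0], answers[1:]
    if choice not in options:
        return visited  # an unrecognized answer ends the dialogue
    return visited + walk(tree, options[choice], rest)

print(walk(tree, 1, ["yes", "visual"]))
```

Because the navigation logic is pure, it can be unit-tested directly, with the Discord send/wait_for plumbing kept at the edges.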
QUESTION
Consider two semantically equivalent pieces of code:
...ANSWER
Answered 2021-Aug-07 at 10:05
The answer can be found on docs.rust-lang.org. Specifically, refer to the if let article, where it's said that:
An if let expression is equivalent to a match expression as follows: if let PATS = EXPR { /* body */ } else { /* else */ } is equivalent to match EXPR { PATS => { /* body */ }, _ => { /* else */ } }
After heading to the match article, it's said that:
A match behaves differently depending on whether or not the scrutinee expression is a place expression or value expression.
If the scrutinee expression is a value expression, it is first evaluated into a temporary location, and the resulting value is sequentially compared to the patterns in the arms until a match is found. The first arm with a matching pattern is chosen as the branch target of the match, any variables bound by the pattern are assigned to local variables in the arm's block, and control enters the block.
... more about place expressions
In your case the scrutinee *x.lock().unwrap() is a value expression, so the temporary (the mutex guard) lives for the whole if let expression. Thus you get a deadlock, trying to .lock() the mutex again while it's already locked by your own if let.
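The hazard is not specific to Rust. With Python's non-reentrant threading.Lock, the same mistake is directly observable (a small illustration, unrelated to the original Rust code):

```python
import threading

lock = threading.Lock()  # non-reentrant, like the guard from a Rust Mutex

with lock:
    # While the guard (the with-block) is alive, a second acquire fails.
    print(lock.acquire(blocking=False))  # False: still held by this scope

# Once the with-block has released the lock, acquiring succeeds again.
print(lock.acquire(blocking=False))  # True
lock.release()
```

A blocking second acquire inside the with-block would deadlock outright, which is the Python counterpart of locking the mutex inside its own if let.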
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported