linger | Busy-indicator for the terminal | Command Line Interface library
kandi X-RAY | linger Summary
Busy-indicator for the terminal
Top functions reviewed by kandi - BETA
- Send data to stdout
- Decode a string
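linger's own source is not shown on this page, but the two functions above suggest the familiar spinner pattern. As a rough illustration only (generic Python, not linger's actual code or API), a terminal busy-indicator typically writes its frames to stdout like this:

```python
# Generic busy-indicator sketch; illustrative only, NOT linger's API.
import itertools
import sys
import threading
import time

def spin(stop_event, message="working"):
    # Cycle through spinner frames, rewriting the same stdout line in place.
    for frame in itertools.cycle("|/-\\"):
        if stop_event.is_set():
            break
        sys.stdout.write(f"\r{message} {frame}")
        sys.stdout.flush()
        time.sleep(0.1)
    sys.stdout.write("\r" + " " * (len(message) + 2) + "\r")  # clear the line

stop = threading.Event()
spinner = threading.Thread(target=spin, args=(stop,))
spinner.start()
time.sleep(2)  # stand-in for the real long-running work
stop.set()
spinner.join()
```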
linger Key Features
linger Examples and Code Snippets
Community Discussions
Trending Discussions on linger
QUESTION
It's my first Kafka program. From a kafka_2.13-3.1.0 instance, I created a Kafka topic poids_garmin_brut and filled it with this CSV:
ANSWER
Answered 2022-Feb-15 at 14:36: Following should work.
QUESTION
Some version information:
- JBoss 6.4
- Postgres 9.6
- mybatis-3 CDI
- Postgres Driver 42.2.20 JDBC 4
I'm having a problem that is causing pretty catastrophic behavior in my system. From my debugging I've been able to deduce that an idle transaction appears to be locking a table in my database, causing the application to freeze (certain locks aren't being released). I've been able to stop the freezing by setting timeouts in MyBatis, but I cannot figure out what is causing the idle transaction in the first place. The good news is that it's always the same UPDATE statement that appears to be blocked. However, I can't narrow down what query/transaction is occurring, and I'm seeing behavior that I don't understand.
Here is the query that always seems to lock up (Some names were changed but this query normally works):
...ANSWER
Answered 2022-Feb-15 at 16:39: So I discovered what the problem was. The issue really wasn't the database's fault or even the queries that were being used. It turns out that our system was using the same transaction subsystem for both our data source (Postgres database) and our JMS messaging system. When a JMS message was sent, it created a transaction, and every transactional action that followed during the life cycle of that thread/transaction was treated as part of that original transaction, which includes all of our database calls.
This explains why a query as simple as an insert into a message log was touching all of our relations in the database. The debug queries only showed me the first query/statement sent to the database, not all of the others that were used during the life cycle of the JMS message. There were several ways to fix this, but my team opted for the easiest, which was preventing the data source from using the JBoss-provided transaction manager.
QUESTION
I have followed the https://noisysocks.com/2021/11/12/set-up-a-wordpress-development-environment-using-homebrew-on-macos/ tutorial to set up a WordPress development environment using Homebrew on macOS 12. Previously I had been using only MAMP.
The problem is at the final step:
You should now be able to browse to http://wp-build.test/wp-admin and log in. The username is admin and the password is password.
when I’m launching http://wp-build.test/wp-admin
...ANSWER
Answered 2022-Feb-08 at 07:24: Is Apache running?
You must also spoof DNS to point to your local IP, so your machine does not ask internet DNS servers for an IP they would not be able to resolve.
I'm assuming you're on a Mac, so edit /etc/hosts and add:
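The actual line from the answer is not preserved on this page; given the hostname used throughout the question, the entry would presumably be:

```
127.0.0.1 wp-build.test
```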
QUESTION
I've been getting an error that I've never seen before. I keep seeing this:
Run-time error '3048':
Cannot open any more databases.
Having Googled it, it seems this happens when there are very complicated forms with lots of lists or combo boxes that have a table/query as their source. However, I've not changed these forms for a while now, and I'm suddenly seeing this. Plus, my forms really aren't that complicated: usually just a single list and maybe one or two combo boxes. I just started seeing this error yesterday (2/2/22).
Almost in all cases I'm accessing the tables by using this code:
...ANSWER
Answered 2022-Feb-03 at 21:07: This is, sadly, a known current bug:
Access doesn't close properly. A remaining background process can only be terminated in task manager
There is no official info or remedy yet.
QUESTION
We have an HTTP endpoint that receives some data and sends it to Kafka. We would like to respond immediately with an error if the broker is down, instead of retrying asynchronously. Is this possible at all?
What we are doing is starting the application, shutting down the broker and sending messages to see what happens. We are sending the messages using the blocking option described here.
...ANSWER
Answered 2022-Jan-11 at 16:17: producer.send puts data into an internal queue, which is only sent to the broker when the producer is flushed (which is the effect of calling .get()). If you need to detect a connection before calling .send, then you need to actually make the connection beforehand, for example with an AdminClient.describeCluster method call.
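The answer is phrased against the Java client. As a rough sketch of the same idea in Python, assuming the kafka-python package and a placeholder broker address and topic (none of which appear in the original question), a pre-flight connectivity check could look like this:

```python
# Sketch only: fail fast if no broker is reachable, instead of letting the
# producer retry asynchronously. Assumes the kafka-python client.
from kafka import KafkaProducer
from kafka.admin import KafkaAdminClient
from kafka.errors import NoBrokersAvailable

BOOTSTRAP = "localhost:9092"  # placeholder, not from the original question

def broker_reachable(bootstrap_servers: str) -> bool:
    # Constructing an admin client forces a metadata request to the cluster,
    # so it raises NoBrokersAvailable when nothing is listening.
    try:
        admin = KafkaAdminClient(bootstrap_servers=bootstrap_servers,
                                 request_timeout_ms=3000)
        admin.close()
        return True
    except NoBrokersAvailable:
        return False

if broker_reachable(BOOTSTRAP):
    producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
    producer.send("some_topic", b"payload").get(timeout=10)  # blocking send
else:
    raise RuntimeError("Kafka broker is down")  # answer the HTTP call with an error
```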
QUESTION
I am trying to kill a subprocess that expects a key press 'q' in the terminal in order to stop gracefully.
This executable has two modes of running (tried both):
- takes over the terminal with probably some sort of ncurses (considering it is Windows it is probably something else)
- just runs in the terminal as a regular command and waits for a key press
I have tried spawning the subprocess with subprocess.Popen(command_parts), where command_parts is a list with the executable and its various flags.
I have added the following arguments to the Popen constructor in multiple combinations:
- no special flags
- with creationflags=subprocess.DETACHED_PROCESS
- with stdin=PIPE
I have tried sending to the stdin of the executable the following strings:
b"q"
b"q\n"
b"q\r\n"
I have tried communicating with the executable in the following ways:
- subprocess_instance.communicate(input_stuff)
- subprocess_instance.stdin.write(input_stuff); subprocess_instance.stdin.flush()
None of these attempts results in the executable shutting down gracefully; it just lingers forever as if nothing had happened on stdin.
Observations:
- the q keystroke works if the executable is simply run from PowerShell
- the executable has to close gracefully otherwise it results in some undesired behaviour
- Python versions used: 3.8.*, 3.9.*
UPDATE:
I tried using a sample C program that waits for 'q':
...ANSWER
Answered 2021-Dec-15 at 20:37: There are multiple ways in which a Python script can communicate with a subprocess when it comes to keypresses:
- pywin32
- pynput
- pyautogui
- ctypes + user32.dll
(credits to @john-hen, inspired by https://stackoverflow.com/a/8117562/858565)
Package: https://pypi.org/project/pywin32/
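As a minimal sketch of one of the options above (pyautogui), assuming Windows, that the child runs in its own console window, and that this window has keyboard focus when the key is injected; the executable name and flags below are placeholders:

```python
# Sketch only: inject a real 'q' keypress instead of writing to stdin.
import subprocess
import time

import pyautogui

command_parts = ["tool.exe", "--some-flag"]  # placeholder executable and flags
proc = subprocess.Popen(command_parts,
                        creationflags=subprocess.CREATE_NEW_CONSOLE)

time.sleep(5)          # let the tool do its work
pyautogui.press("q")   # keypress goes to the focused window, i.e. the new console
proc.wait(timeout=30)  # the tool should now shut down gracefully
```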
QUESTION
We've been moving our applications from CircleCI to GitHub Actions in our company and we got stuck with a strange situation.
There has been no change to the project's code, but our Kafka integration tests started to fail on GH Actions machines. Everything works fine in CircleCI and locally (macOS and Fedora Linux machines).
Both CircleCI and GH Actions machines are running Ubuntu (tested versions were 18.04 and 20.04). macOS was not tested in GH Actions as it doesn't have Docker.
Here are the docker-compose and workflow files used by the build and integration tests:
- docker-compose.yml
ANSWER
Answered 2021-Nov-03 at 19:11: We identified some test sequence dependency between the Kafka tests.
We updated our Gradle version to 7.3-rc-3, which has a more deterministic approach to test scanning. This update "solved" our problem while we prepare to fix the tests' dependencies.
QUESTION
Let's say I have a class TT inheriting from a class T and extending the behaviour of one of its methods, like so:
ANSWER
Answered 2021-Oct-04 at 23:32: There's no straightforward non-hacky way. (And I don't even know if there is a hacky way.)
What you propose violates the encapsulation principle. A class hides its nitty-gritty internals and only exposes a neat interface with promised behavior.
Inheritance is not a mechanism to violate this principle.
In your concrete example, the issue comes from a bad interface design of T. If T had a method compute_a() that returned self.p + self.b, then in your inherited class you could of course call self.compute_a().
But only do this if a is more than a mere internal implementation detail!
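A minimal sketch of that refactor (the real bodies of T and TT are not shown above, so the attribute names simply follow the answer's wording):

```python
# Sketch only: expose the intermediate value through a method instead of
# relying on T's internals. Attribute names (p, b) follow the answer above.
class T:
    def __init__(self, p, b):
        self.p = p
        self.b = b

    def compute_a(self):
        # part of the public interface, safe for subclasses to reuse
        return self.p + self.b

    def method(self):
        return self.compute_a() * 2


class TT(T):
    def method(self):
        # extend behaviour via the interface, not via T's private details
        return super().method() + self.compute_a()
```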
QUESTION
I am trying to write data into a Kafka topic after reading a Hive table, as below.
...ANSWER
Answered 2021-Sep-22 at 20:54: The spark.streaming.[...] configurations you are referring to belong to Direct Streaming (aka Spark Streaming), not to Structured Streaming.
In case you are unaware of the difference, I recommend having a look at the separate programming guides:
- Structured Streaming: processing structured data streams with relational queries (using Datasets and DataFrames, a newer API than DStreams)
- Spark Streaming: processing data streams using DStreams (old API)
Structured Streaming does not provide a backpressure mechanism. Since you are consuming from Kafka, you can use (as you are already doing) the option maxOffsetsPerTrigger to limit the number of messages read on each trigger. This option is documented in the Structured Streaming and Kafka Integration Guide as:
"Rate limit on maximum number of offsets processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume."
In case you are still interested in the title question:
"How is spark.streaming.kafka.maxRatePerPartition related to spark.streaming.backpressure.enabled in case of Spark Streaming with Kafka?"
This relation is explained in the documentation on Spark's Configuration:
"Enables or disables Spark Streaming's internal backpressure mechanism (since 1.5). This enables the Spark Streaming to control the receiving rate based on the current batch scheduling delays and processing times so that the system receives only as fast as the system can process. Internally, this dynamically sets the maximum receiving rate of receivers. This rate is upper bounded by the values
spark.streaming.receiver.maxRate
andspark.streaming.kafka.maxRatePerPartition
if they are set (see below)."
All details on the backpressure mechanism available in Spark Streaming (DStream, not Structured Streaming) are explained in the blog you have already linked, Enable Back Pressure To Make Your Spark Streaming Application Production Ready.
Typically, if you enable backpressure you would set spark.streaming.kafka.maxRatePerPartition to be 150% to 200% of the optimal estimated rate.
The exact calculation of the PID controller can be found in the code within the class PIDRateEstimator.
Backpressure Example with Spark Streaming
As you asked for an example, here is one that I have done in one of my production applications:
Set-Up
- Kafka topic has 16 partitions
- Spark runs with 16 worker cores, so each partition can be consumed in parallel
- Using Spark Streaming (not Structured Streaming)
- Batch interval is 10 seconds
- spark.streaming.backpressure.enabled set to true
- spark.streaming.kafka.maxRatePerPartition set to 10000
- spark.streaming.backpressure.pid.minRate kept at its default value of 100
- The job can handle around 5000 messages per second per partition
- The Kafka topic contains multiple millions of messages in each partition before the streaming job starts
- In the very first batch the streaming job fetches 16000 (= 10 seconds * 16 partitions * 100 pid.minRate) messages.
- The job processes these 16000 messages quite fast, so the PID controller estimates an optimal rate larger than the maxRatePerPartition of 10000.
- Therefore, in the second batch, the streaming job fetches 1600000 (= 10 seconds * 16 partitions * 10000 maxRatePerPartition) messages.
- Now, it takes around 22 seconds for the second batch to finish
- Because our batch interval was set to 10 seconds, after 10 seconds the streaming job already schedules the third micro-batch, again with 1600000 messages. The reason is that the PID controller can only use performance information from finished micro-batches.
- Only in the sixth or seventh micro-batch does the PID controller find the optimal processing rate of around 5000 messages per second per partition.
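For reference, a sketch of how the set-up above might be expressed in code (PySpark shown; the original application itself is not part of the answer):

```python
# Sketch only: the backpressure-related settings from the set-up above,
# applied to a DStream (Spark Streaming) application.
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

conf = (SparkConf()
        .setAppName("backpressure-example")
        .set("spark.streaming.backpressure.enabled", "true")
        .set("spark.streaming.kafka.maxRatePerPartition", "10000")
        .set("spark.streaming.backpressure.pid.minRate", "100"))

sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, batchDuration=10)  # 10-second batch interval
```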
QUESTION
I am very new to Kafka and am trying to write data into a topic and read from the same topic (we are acting as a source team to ingest data for now, hence we are doing both operations: writing to a Kafka topic and consuming from the same topic). I wrote the below code on spark-shell to write data into a Kafka topic.
...ANSWER
Answered 2021-Sep-09 at 07:43: This is quite a broad topic with questions that require some thorough answers. Anyway, most importantly:
- in general, Kafka scales with the number of partitions in a topic
- Spark scales with the number of worker nodes and available cores/slots
- each partition of the Kafka topic can only be consumed by a single Spark task (parallelism then depends on the number of Spark cores)
- if you have multiple Spark workers but only one Kafka topic partition, only one core can consume the data
- Likewise, if you have multiple Kafka topic partitions but only one worker node with a single core, the "parallelism" is 1
- remember that a formula usually represents a theory which, for simplicity, leaves out details. The formula you have cited is a good starting point, but in the end it depends on your environment: requirements for latency or throughput, network bandwidth/traffic, available hardware, costs, etc. That being said, only you can do the testing needed for optimisation.
As a side note, when writing to Kafka from Spark Structured Streaming, if your Dataframe contains the column "partition" it will be used to send the record to the corresponding partition (starting from 0). You can also have the column "topic" in the dataframe which allows you to send the record to a certain topic.
Spark Structured Streaming will send each record individually to Kafka.
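A sketch of that write path (PySpark shown; the input DataFrame df, topic name, broker address, and checkpoint path are placeholders):

```python
# Sketch only: write a streaming DataFrame to Kafka. A "value" column is
# required; optional "topic" and "partition" columns control routing.
from pyspark.sql import functions as F

out = df.select(
    F.to_json(F.struct("*")).alias("value"),
    F.lit("output_topic").alias("topic"),  # placeholder topic name
)

query = (out.writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")   # placeholder
         .option("checkpointLocation", "/tmp/checkpoints/demo")  # placeholder
         .start())
```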
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install linger
Support