retryable | Python Retry Decorator | Functional Programming library
kandi X-RAY | retryable Summary
from retryable import retry. If all retry attempts fail, the wrapper will raise the exception caught during the last attempt. It will attach the `retry_count` attribute to the exception before it throws it. The caller can then inspect the exception to see how many retries were attempted. If there are no retries attempted and the function raises, the original exception raised by the function will be passed up to the caller to be handled.
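The page does not reproduce the decorator's actual signature, so the following is a minimal sketch of the behavior described above (the `times` parameter name is an assumption, not the library's documented API):

```python
import functools

def retry(times=3):
    """Sketch of a retry decorator matching the described behavior:
    call the function, retrying up to `times` more times on any
    exception; if every attempt fails, attach the number of retries
    as `retry_count` and re-raise the last exception caught."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            retries = 0
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    if retries >= times:
                        if retries:  # no retries -> original exception untouched
                            exc.retry_count = retries
                        raise
                    retries += 1
        return wrapper
    return decorator

attempts = []

@retry(times=2)
def flaky():
    attempts.append(1)
    raise ValueError("service unavailable")

try:
    flaky()
except ValueError as exc:
    print(len(attempts), exc.retry_count)  # 1 initial call + 2 retries
```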
Top functions reviewed by kandi - BETA
- Retry decorator.
- Finalize options.
- Run develop.
Community Discussions
Trending Discussions on retryable
QUESTION
Spring + Apache Kafka noob here. I'm wondering if its advisable to run a single Spring Boot application that handles both producing messages as well as consuming messages.
A lot of the applications I've seen using Kafka lately usually have one separate application send/emit the message to a Kafka topic, and another one that consumes/processes the message from that topic. For larger applications, I can see a case for separate producer and consumer applications, but what about smaller ones?
For example: I have a simple app that processes HTTP requests and sends requests to a third-party service. To ensure retryability, should I put the request on a Kafka queue and consume it with a service that uses the @Retryable annotation?
And what other considerations might come into play since it would be on the Spring framework?
...ANSWER
Answered 2022-Apr-16 at 18:16
Note: as your question states, what I'll say is more advice based on my beliefs and experience than some absolute truth written in stone.
Your use case seems more like a proxy than an actual application with business logic. You should make sure that making this an asynchronous service makes sense - maybe it's good enough to simply hold the connection until you get a response from the third party, and let your client handle retries if you get an error - of course, you can also retry until some timeout.
This would avoid common asynchronous issues, such as your client needing to poll or register a webhook in order to get a result, or having to make sure a record still makes sense to process after a lot of time has elapsed following an outage or high consumer lag.
If your client doesn't care about the result as long as it gets done, and you don't expect high throughput on either side, a single Spring Boot application should be enough to handle both the producer and consumer sides - while also keeping it simple.
If you do expect high throughput, I'd look into building a WebFlux-based application with the reactor-kafka library - high-throughput proxies are an excellent use case for reactive applications.
Another option would be a simple serverless function that handles the HTTP requests and produces the records, and a standard Spring Boot application to consume them.
TBH, I don't see a use case where having two full-fledged Java applications to handle a proxy duty would pay off, unless you have such a sound infrastructure for managing them that running two applications instead of one makes no difference and using more resources is not an issue.
Actually, if you expect really high traffic and a serverless function wouldn't work, or maybe you want to stick to Java-based solutions, then you could have a simple WebFlux-based application to handle the HTTP requests and send the messages, and a standard Spring Boot or another WebFlux application to handle consumption. This way you'd be able to scale up the former to accommodate the high traffic, and independently scale the latter to match your performance requirements.
As for the retry part, if you stick to non-reactive Spring Kafka applications, you might want to look into the non-blocking retries feature of Spring Kafka. This will enable your consumer application to process other records while waiting to retry a failed one - the @Retryable approach is deprecated in favor of DefaultErrorHandler, and both block consumption while waiting.
Note that with non-blocking retries you lose ordering guarantees, so use them only if the order in which the requests are processed is not important.
QUESTION
Apologies here in advance for this non-simplified use case.
During one of my data load processes, concurrent request transactions are used to fill MarkLogic.
Each concurrent thread does the following operations at a high level:
...ANSWER
Answered 2022-Apr-07 at 20:54
"When debugging, we do see that while this transaction is being run, documents returned in the cts:search are locked for updates in other transactions. We are well aware of this possibility and are okay with it."
You may think that you are okay with it, but you are running into performance issues that are likely due to it, and are looking to avoid timeouts - so you probably aren't okay with it.
When you perform a search in an update transaction, all of the fragments will get a read-lock. You can have multiple transactions all obtain read-locks on the same URI without a problem. However, if one of those transactions then decides it wants to update one of those documents, it needs to promote its shared read-lock to an exclusive write-lock. When that happens, all of the other transactions that had a read-lock on that URI will get restarted. If they need access to the URI that has an exclusive write-lock, they will have to wait until the transaction holding the write-lock completes and lets go.
So, if you have a lot of competing transactions all performing searches with the same criteria and trying to snag the first item (or first set of items) from the search results, they can cause each other to keep restarting and/or waiting, which takes time. Adding more threads in an attempt to do more makes it even worse.
There are several strategies that you can use to avoid this lock contention.
Instead of using cts:search() to search and retrieve the documents, you could use cts:uris(), and then, before reading the doc with fn:doc() (which would first obtain a read-lock) and attempting to UPSERT (which would promote the read-lock to a write-lock), you could use xdmp:lock-for-update() on the URI to obtain an exclusive write-lock up front, and then read the doc with fn:doc().
If you are trying to perform some sort of batch processing, you can use a tool such as CoRB to first query for the set of URIs to process (lock-free) in a read-only transaction, and then fire off lots of worker transactions to process each URI separately, where each reads/locks its doc without any contention.
You could also separate the search and update work, using xdmp:invoke-function() or xdmp:spawn-function(), so that the search is executed lock-free and the update work is isolated.
Some resources that describe locks and performance issues caused by lock contention:
- https://www.marklogic.com/blog/resolving-unresolvable-deadlocks/
- https://help.marklogic.com/Knowledgebase/Article/View/17/0/understanding-xdmp-deadlock
- https://help.marklogic.com/Knowledgebase/Article/View/understanding-locking-in-marklogic-using-examples
- https://help.marklogic.com/Knowledgebase/Article/View/strategies-to-ensure-if-locking-is-the-root-cause-for-performance-degradation-in-marklogic
QUESTION
I am trying to list all available savings plans in my account using
...ANSWER
Answered 2022-Mar-03 at 07:35
According to the docs, the endpoint URL is:
savingsplans.amazonaws.com
You can manually specify the correct endpoint:
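The snippet from the original answer is not reproduced above. With boto3, configuring the client to use that endpoint explicitly might look like the following sketch (the region name is illustrative, and running it requires valid AWS credentials):

```python
import boto3

# Point the Savings Plans client at its endpoint explicitly,
# in case the SDK resolves a wrong or regionalized URL.
client = boto3.client(
    "savingsplans",
    region_name="us-east-1",  # illustrative
    endpoint_url="https://savingsplans.amazonaws.com",
)

# List the savings plans in the account.
response = client.describe_savings_plans()
for plan in response["savingsPlans"]:
    print(plan["savingsPlanId"])
```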
QUESTION
My team is writing a service that leverages the retryable topics mechanism offered by Spring Kafka (version 2.8.2). Here is a subset of the configuration:
...ANSWER
Answered 2022-Feb-17 at 22:05
That is a good suggestion; it probably should be the default behavior (or at least an option).
Please open a feature request on GitHub.
There is a, somewhat, related discussion here: https://github.com/spring-projects/spring-kafka/discussions/2101
QUESTION
I have an AWS lambda function in a VPC on AWS account A that has a peering connection with a VPC on AWS account B containing a DAX cluster. I'm getting the following error when trying to connect to the DAX cluster from my lambda.
...ANSWER
Answered 2022-Feb-11 at 21:03
I was able to solve this issue with the help of an AWS rep. It turns out I needed a public and a private subnet in the VPC containing the lambda. The lambda itself had to be in the private subnet, with the public subnet containing a NAT gateway and an internet gateway. Instead of a single route table in the VPC, I needed separate route tables for the two subnets. The private one contains the peering connection route and the VPC CIDR route, as I mentioned in my question, but also contains a route with destination 0.0.0.0/0 with the NAT gateway as the target. The public subnet route table contains the VPC CIDR route as well as a route with destination 0.0.0.0/0 with the internet gateway as the target.
QUESTION
Trying to install and configure the AWS Toolkit for Visual Studio Code.
The command Command Palette -> Create Credentials profile brings up two files:
ANSWER
Answered 2022-Feb-03 at 08:51
You'll need to get the access key and secret key from AWS and insert them in place of the XXXXXXX placeholders.
You can find this information in the AWS console under IAM -> Access Management -> Users -> Select your user -> Security credentials -> Access Keys.
There you will find the Access Key ID, but the Secret Key is only shown once, when you create it. You may have it stored somewhere, or you can create another Access Key pair and use that.
I have done this and I can connect to the AWS Toolkit fine.
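For reference, the two generated files typically follow this shape (a sketch: the XXXXXXX placeholders are where your keys go, and the region value is illustrative):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = XXXXXXX
aws_secret_access_key = XXXXXXX

# ~/.aws/config
[default]
region = us-east-1
```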
QUESTION
I'm trying to write a simple run_transaction function for the Rust MongoDB driver. This function tries to execute a transaction through the MongoDB client and retries the transaction if it encounters a retryable error.
Here is a minimum reproducible example of the function.
...ANSWER
Answered 2022-Jan-27 at 19:43
What you really need is something like:
QUESTION
I would like to create a @Bean of a third-party service like Keycloak (or any other) which may or may not be reachable at any given time. This object should retry all methods of the resulting Keycloak bean.
I have tried the following:
...ANSWER
Answered 2022-Jan-05 at 15:17
You have to annotate the Keycloak bean with @Retryable.
QUESTION
I am trying to delete all records which have the attribute "code" from a DynamoDB table. Here my params:
...ANSWER
Answered 2021-Nov-18 at 09:15
According to the documentation:
ConditionalCheckFailedException Message: The conditional request failed.
You specified a condition that was evaluated to be false. For example, you might have tried to perform a conditional update on an item, but the actual value of the attribute did not match the expected value in the condition.
OK to retry? No
So the problem is with the ConditionExpression: the code attribute does not exist in some records, or it exists with an undefined value. Try changing attribute_exists to attribute_type:
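As an illustration of what that change looks like, here is a sketch of the delete parameters in the low-level DynamoDB request shape (table name and key are hypothetical; attribute_type takes a DynamoDB type code, e.g. "S" for string):

```python
# Delete only when `code` exists AND is stored as a string ("S").
# Unlike attribute_exists alone, this also rejects records where the
# attribute was written with an unexpected type.
params = {
    "TableName": "my-table",                 # hypothetical table
    "Key": {"id": {"S": "item-1"}},          # hypothetical key
    "ConditionExpression": "attribute_type(code, :t)",
    "ExpressionAttributeValues": {":t": {"S": "S"}},
}
print(params["ConditionExpression"])
```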
QUESTION
We have some onsite servers that we deploy to with Azure DevOps release pipelines. We have been using these release pipelines for several months with no issues. Today, we started getting an authentication error while downloading the artifact for the project.
The nodes in the deployment group show online, the build pipeline is successful, yet it shows the error below when we try to deploy anything. We even tried completely reconfiguring the Azure agent on each server for one of our applications to ensure that some access tokens hadn't expired or something, but it still gives the error below. There is no elaboration on what the error is beyond just "Authentication required". Googling has been fruitless, as well. What could cause an authentication error at this "Download artifact" stage?
Ticket submitted to Microsoft, other than a suggestion to skip downloading the artifact (which didn't work) nothing has been suggested.
Update: The deployment works on my home PC and my local PC at work, so the issue seems to be limited to our web servers. Here is the log with System.Debug enabled. Doesn't seem to be anything helpful, but I may have missed something.
...ANSWER
Answered 2021-Nov-05 at 18:30
It looks like the cause was embarrassingly simple. Some sort of policy or cleanup process had removed dev.azure.com from the list of trusted sites on our servers. Once I added it back, it works.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install retryable
You can use retryable like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.