sqs | SQS client | AWS library
kandi X-RAY | sqs Summary
An API client for SQS.
Top functions reviewed by kandi - BETA
- Returns the endpoint metadata.
- Gets queue URLs.
- Builds the request body.
- Creates a new Request.
- Populates the result message list.
- Populates the result set of send_message_result_Entry objects.
- Populates BatchResultEntry objects from a result set.
- Extracts the state from the response.
- Populates the result set of DeleteMessageBatchBatchResultEntry objects.
- Populates the result queue attribute map.
Community Discussions
Trending Discussions on sqs
QUESTION
I am having a lot of issues handling concurrent runs of a StateMachine (Step Function) that has a GlueJob task in it.
The state machine is initiated by a Lambda that gets triggered by a FIFO SQS queue.
The lambda gets the message, checks how many state machine instances are running, and if this number is below the GlueJob concurrent runs threshold, it starts the State Machine.
The problem I am having is that this check fails most of the time. The state machine starts although there is not enough concurrency available for my GlueJob. Obviously, the message the SQS queue passes to the lambda gets processed, so if the state machine fails for this reason, that message is gone forever (unless I catch the exception and send a new message back to the queue).
I believe this behavior is due to the speed at which messages get processed by my lambda (although it's a FIFO queue, so one message at a time), and the fact that my checker cannot keep up.
I have implemented some time.sleep() here and there to see if things get better, but with no substantial improvement.
I would like to ask you if you have ever had issues like this one and how you solved them programmatically.
Thanks in advance!
This is my checker:
...ANSWER
Answered 2022-Jan-22 at 14:39 You are going to run into problems with this approach because the call to start a new flow may not immediately cause list_executions() to show a new number. There may be some seconds between requesting that a new workflow start and the workflow actually starting. As far as I'm aware, there are no strong consistency guarantees for the list_executions() API call.
You need something that is strongly consistent, and DynamoDB atomic counters are a great solution for this problem. Amazon published a blog post detailing the use of DynamoDB for this exact scenario. The gist is that you would attempt to increment an atomic counter in DynamoDB, with a condition expression that causes the increment to fail if it would take the counter above a certain value. Catching that failure/exception is how your Lambda function knows to send the message back to the queue. Then, at the end of the workflow, you call another Lambda function to decrement the counter.
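To make the counter pattern concrete, here is a minimal boto3 sketch of the conditional increment/decrement. The table name workflow_limits, the key, the attribute name, and the limit of 3 are illustrative assumptions, not from the original answer:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

MAX_CONCURRENT = 3  # assumed GlueJob concurrency limit


def try_acquire_slot() -> bool:
    """Atomically increment the running counter, failing if at the limit."""
    try:
        dynamodb.update_item(
            TableName="workflow_limits",          # hypothetical table
            Key={"pk": {"S": "glue-job-runs"}},   # hypothetical key
            UpdateExpression="ADD running :inc",
            ConditionExpression="attribute_not_exists(running) OR running < :max",
            ExpressionAttributeValues={
                ":inc": {"N": "1"},
                ":max": {"N": str(MAX_CONCURRENT)},
            },
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # at capacity: send the message back to the queue
        raise


def release_slot() -> None:
    """Decrement the counter at the end of the workflow."""
    dynamodb.update_item(
        TableName="workflow_limits",
        Key={"pk": {"S": "glue-job-runs"}},
        UpdateExpression="ADD running :dec",
        ExpressionAttributeValues={":dec": {"N": "-1"}},
    )
```

The condition expression is what makes this strongly consistent: the increment and the limit check happen in a single atomic DynamoDB operation.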
QUESTION
I have a SQS queue which fills up with messages throughout the day and I want to start processing all the messages at a specific time. The scenario would be:
- Between 9AM and 5PM the queue would receive messages
- At 6PM the messages should be processed by a lambda
I was thinking of:
- Enabler: Lambda A, which will be executed by a CloudWatch EventBridge rule at 6PM. This lambda would create an SQS trigger for Lambda C.
- Disabler: Lambda B, which will be executed by a CloudWatch EventBridge rule at 8PM. This lambda would remove the SQS trigger of Lambda C.
- Executor: Lambda C, which processes the messages in the queue.
Is this the best way to do this?
...ANSWER
Answered 2022-Jan-25 at 14:36 I would aim for the process which requires the least complexity / smallest changes to your Lambda. You could use the AWS SDK to enable/disable your Lambda's subscription, rather than actually deleting and recreating it. See this question on how to do so, and specifically the enabled parameter in the updateEventSourceMapping() method in the docs it links to:
Enabled — (Boolean) When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
The advantage is that the only thing you're changing is the enabled flag - everything else (the SQS-Lambda subscription, if you will) is unchanged.
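As a sketch of that approach in Python, using boto3's update_event_source_mapping (the Python equivalent of the updateEventSourceMapping() call above); the mapping UUID is a placeholder:

```python
import boto3

lambda_client = boto3.client("lambda")

# UUID of the SQS -> Lambda event source mapping; you can find it with
# lambda_client.list_event_source_mappings(FunctionName="lambda-c").
MAPPING_UUID = "00000000-0000-0000-0000-000000000000"  # hypothetical


def set_polling(enabled: bool) -> None:
    """Pause or resume SQS polling without deleting the trigger."""
    lambda_client.update_event_source_mapping(
        UUID=MAPPING_UUID,
        Enabled=enabled,
    )

# Enabler lambda (6PM rule):  set_polling(True)
# Disabler lambda (8PM rule): set_polling(False)
```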
This approach still has the danger that if the enabler/disabler lambda(s) fail, your processing will not occur during your target hours. Particularly, I'm not personally super confident in the success rate of AWS's self-mutating commands - this might just be my bias, but it definitely leans toward "infra changes tend to fail more often than regular AWS logic".
It's worth considering whether you really need this implementation, or whether the time-based aggregation can be done on the results (e.g., let this Lambda processing run on events as they come in and write the output to some holding pen, then at Xpm when you trust all events have come in, move the results for today from the holding pen into your main output).
This approach may be safer, in that a failed trigger on the "moving" step could be easier / faster to recover from than a failed trigger on the above "process all my data now" step, and it would not depend on changing the Lambda definition.
QUESTION
I'm using AWS Eventbridge and I have the exact same rule on my default bus as on a custom bus. The target for both is an SQS queue. When I push an event I can see a message on my queue which is the target of the rule of my default bus.
I don't see anything on the queue of the rule of my custom bus. Also, the metrics don't show an invocation. What am I doing wrong? I've created a custom bus.
I tried it both without any policy and with the following policy:
...ANSWER
Answered 2022-Jan-24 at 11:31 Your custom bus will not receive any "aws.ssm" events. All aws.* events go to the default bus only. The custom bus can only receive custom events from your application, e.g.:
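The original example is elided above; as an illustration, publishing a custom event to a custom bus might look like the following boto3 sketch, where the bus name, source, and detail-type are hypothetical:

```python
import json
import boto3

events = boto3.client("events")

# Custom application events carry your own Source/DetailType; unlike
# aws.* service events, they can be delivered to a custom bus.
events.put_events(
    Entries=[
        {
            "EventBusName": "my-custom-bus",   # hypothetical bus name
            "Source": "com.myapp.orders",      # hypothetical source
            "DetailType": "OrderCreated",      # hypothetical detail-type
            "Detail": json.dumps({"orderId": "1234"}),
        }
    ]
)
```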
QUESTION
We have several lambda functions, and I've automated code deployment using the gradle-aws-plugin-reboot plugin.
It works great on all but one of the lambda functions. On that particular one, I'm getting this error:
...ANSWER
Answered 2021-Dec-09 at 10:42 I figured it out. You better not hold anything in your mouth, because this is hilarious!
Basically being all out of options, I locked on to the last discernible difference between this deployment and the ones that worked: The filesize of the jar being deployed. The one that failed was by far the smallest. So I bloated it up by some 60% to make it comparable to everything else... and that fixed it!
This sounds preposterous. Here's my hypothesis on what's going on: if the upload takes too little time, the lambda somehow needs longer to change its state. I'm not sure why that would be; you'd expect the state to change when things are done, not to take longer if things are done faster, right? Maybe there's a minimum time for the state to remain? I wouldn't know. There's one thing to support this hypothesis, though: the deployment from my local computer always worked. That upload would naturally take longer than Jenkins needs from inside the AWS VPC. So this hypothesis, as ludicrous as it sounds, fits all the facts I have on hand.
Maybe somebody with a better understanding of the lambda-internal mechanisms can add a comment to this explaining how this can happen...
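For what it's worth, the hypothesis lines up with Lambda's documented function states: after a code update, the function's LastUpdateStatus stays InProgress for a short window, and further update calls during that window are rejected. A minimal boto3 sketch of waiting for the update to settle before the next deployment step (the function name is a placeholder):

```python
import boto3

lambda_client = boto3.client("lambda")


def wait_until_updated(function_name: str) -> None:
    """Block until the last update to the function has finished applying."""
    waiter = lambda_client.get_waiter("function_updated")
    waiter.wait(FunctionName=function_name)


# e.g. right after uploading new code:
# wait_until_updated("my-function")  # hypothetical name
# ...only then issue the next configuration/publish call.
```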
QUESTION
I have a Snowpipe created by user A. I would then like a separate user B to check its status using the Snowflake REST API endpoint /insertReport.
- User A is an ACCOUNTADMIN.
- User A created the Snowpipe.
- User A ran the following for user B's default role:
ANSWER
Answered 2022-Jan-11 at 20:52 I have checked with a Snowflake representative - irrespective of the MONITOR and OPERATE privileges, if you want to use /insertReport, you must have OWNERSHIP of the pipe.
The permissions and features found here https://docs.snowflake.com/en/release-notes/2021-03.html#snowpipe-support-for-non-pipe-owners-to-manage-pipes-preview do not mention /insertReport at all. You can let a sub-role start/pause/load/read/check (via SQL) a pipe, but there are no privileges that let non-owners use /insertReport.
QUESTION
I have a daily scheduled task that triggers around 10k lambda functions for 10k records that I need to maintain. I'm using SQS to queue all those messages, and I want to spread execution over a couple of hours, so I set the reserved concurrency to only 3 concurrent invocations. But still, when that scheduled task fires, the concurrent invocations of that lambda function go over 3. Any advice on how to do it? When I check the lambda configuration, it shows that reserved concurrency is 3, but the monitoring shows concurrent invocations way over 3.
...ANSWER
Answered 2021-Dec-21 at 11:38 It's always tricky to use SQS with a Lambda that has a concurrency limit configured because, in short, it is not going to work as expected; instead, you will get some throttled records, because the Lambda, limited by its concurrency, can't process all the messages.
You can check this article, which explains why and offers a workaround: https://zaccharles.medium.com/lambda-concurrency-limits-and-sqs-triggers-dont-mix-well-sometimes-eb23d90122e0
Also check this AWS documentation for further information on the subject: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#events-sqs-queueconfig
QUESTION
I have code that calls setInterval in the try block, and I clear the interval at the end of the code, but if my code catches an exception, the setInterval function keeps running forever. How can I clear this if the sqsInterval that I created in the try block doesn't exist in the catch block?
...ANSWER
Answered 2021-Dec-13 at 15:25 const sqsInterval is scoped inside the try { } block. Declare it outside with let, so that the catch block can also see it and call clearInterval(sqsInterval).
QUESTION
I have a lambda function with SQS as its trigger. When the lambda executes, it either throws an error or it doesn't; if it throws, the message is put back in the queue, which creates a loop (and you know about the AWS bill for sure :)).
Should I return something in the lambda function to let SQS know that I got the message (did the job)? How should I ack the message? As far as I know, we don't have ack and nack in SQS.
Is there any option in the SQS configuration to only retry N times if a job fails?
ANSWER
Answered 2021-Nov-20 at 13:41 For standard use cases, you do not have to actively manage success-failure communication between Lambda and SQS. If the lambda returns without error within the timeout period, SQS will know the message was successfully processed. If the function returns an error, then SQS will retry a configurable number of times and finally direct still-failing messages to a Dead Letter Queue (if configured).
Docs: Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can't be processed (consumed) successfully.
Important: Add your DLQ to the SQS queue, not the Lambda. Lambda DLQs are a way to handle errors for async (event-driven) invocation.
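To answer the retry-N-times part concretely: the redrive policy on the source queue carries the maxReceiveCount. A minimal boto3 sketch, with the queue URL and DLQ ARN as placeholders:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholders: substitute your real queue URL and DLQ ARN.
SOURCE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:jobs-dlq"

# After maxReceiveCount failed receives, SQS moves the message to the
# DLQ instead of redelivering it forever.
sqs.set_queue_attributes(
    QueueUrl=SOURCE_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "5"}
        )
    },
)
```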
QUESTION
I have downloaded the street abbreviations from USPS. Here is the data:
...ANSWER
Answered 2021-Nov-03 at 10:26 Here is the benchmarking for the existing answers to the OP's question (borrowing test data from @Marek Fiołka, but with n <- 10000).
QUESTION
I've been working on a project which so far has just involved building some cloud infrastructure, and now I'm trying to add a CLI to simplify running some AWS Lambdas. Unfortunately, both the sdist and wheel packages built using poetry build don't seem to include the dependencies, so I have to manually pip install all of them to run the command. Basically I:
- run poetry build in the project,
- cd "$(mktemp --directory)",
- python -m venv .venv,
- . .venv/bin/activate,
- pip install /path/to/result/of/poetry/build/above, and then
- run the new .venv/bin/ executable.
At this point the executable fails, because pip did not install any of the package dependencies. If I run pip show PACKAGE, the Requires line is empty.
The Poetry manual doesn't seem to specify how to link dependencies to the built package, so what do I have to do instead?
I am using some optional dependencies; could that be interfering with the build process? To be clear, even non-optional dependencies do not show up in the package dependencies.
pyproject.toml:
...ANSWER
Answered 2021-Nov-04 at 02:15 This appears to be a bug in Poetry. Or at least it's not clear from the documentation what the expected behavior would be in a case such as yours.
In your pyproject.toml, you specify two dependencies as required in this section:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install sqs
PHP requires the Visual C runtime (CRT). The Microsoft Visual C++ Redistributable for Visual Studio 2019 is suitable for all these PHP versions, see visualstudio.microsoft.com. You MUST download the x86 CRT for PHP x86 builds and the x64 CRT for PHP x64 builds. The CRT installer supports the /quiet and /norestart command-line switches, so you can also script it.