spring-integration-aws | Spring Integration channel adapters for Amazon SNS and SQS | Messaging library
kandi X-RAY | spring-integration-aws Summary
A Java library built on top of the Spring Integration Framework to integrate with Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS). Both services are part of Amazon Web Services (AWS).
Top functions reviewed by kandi - BETA
- Deserialize a message from JSON
- Copy the given JSON object into the message builder
- Helper method to set message properties
- Parse gateway
- Method builder
- Parses the SQS executor proxy
- Initializes the properties for this service
- Adds a SNS publish policy on a topic
- Receive message from SQS queue
- Retrieves a single message from SQS
- Initialize polling for messages
- Handles incoming request
- Starts the thread loop
- Generates a hash code for the account
- Initializes the required properties
- Parse the channel adapter
- Serialize a message into a string
- Stop message loop
- Parses the consumer
- Sends message to SQS
- Checks if this permission equals another object
- Parses outbound consumer
- Parse gateway handler
- Build the bean definition
- Parse outbound channel adapter
- Starts the WebSocket server
spring-integration-aws Key Features
spring-integration-aws Examples and Code Snippets
Community Discussions
Trending Discussions on spring-integration-aws
QUESTION
This is a follow-up question to Spring Integration AWS RabbitMQ Kinesis.
I have the following configuration. I am noticing that when I send a message to the input channel named kinesisSendChannel for the first time, the aggregator and release strategy are invoked and messages are sent to Kinesis Streams. I put debug breakpoints at different places and could verify this behavior. But when I publish messages to the same input channel again, the release strategy and the outbound processor are not invoked and messages are not sent to Kinesis. I am not sure why the aggregator flow is invoked only the first time and not for subsequent messages. For testing purposes, the TimeoutCountSequenceSizeReleaseStrategy is set with a count of 1 and a time of 60 seconds. There is no specific MessageStore used. Could you help identify the issue?
...ANSWER
Answered 2021-Oct-12 at 13:10
I believe the problem is in handler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("headers['foo']")): all your messages come with the same foo header, so all of them form the same message group. As long as you release the group and don't remove it, all new messages are going to be discarded.
Please revise the aggregator documentation to make yourself familiar with all the possible behavior: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator
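For illustration, a minimal sketch of an aggregator configured along those lines (the bean and channel names, the 'foo' header and the SimpleMessageStore are assumptions for the example, not the original configuration); expiring the group upon completion lets later messages with the same correlation key form a new group instead of being discarded:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aggregator.AggregatingMessageHandler;
import org.springframework.integration.aggregator.DefaultAggregatingMessageGroupProcessor;
import org.springframework.integration.aggregator.ExpressionEvaluatingCorrelationStrategy;
import org.springframework.integration.aggregator.TimeoutCountSequenceSizeReleaseStrategy;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.store.SimpleMessageStore;

@Configuration
public class AggregatorConfig {

    @Bean
    @ServiceActivator(inputChannel = "kinesisSendChannel")
    public AggregatingMessageHandler aggregator() {
        AggregatingMessageHandler handler =
                new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
                        new SimpleMessageStore());
        handler.setCorrelationStrategy(
                new ExpressionEvaluatingCorrelationStrategy("headers['foo']"));
        handler.setReleaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(1, 60_000));
        // Remove the group once it is released, so new messages carrying the same
        // 'foo' header start a fresh group instead of being discarded.
        handler.setExpireGroupsUponCompletion(true);
        handler.setOutputChannelName("kinesisOutboundChannel");
        return handler;
    }
}
```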
QUESTION
I have a requirement to consume messages from RabbitMQ, do some processing and finally publish messages to a Kinesis Data Stream. We are already using Spring Boot, Spring Integration Core and Spring Integration AMQP 5.5.1 integration flows for consuming messages from RabbitMQ. We are not using Spring Cloud Stream for any of our projects.
Which Spring library do you suggest for the use case of publishing messages to the Kinesis data stream? After going through the Spring docs, I see a couple of options available. Could you please advise which is the best one to pursue?
- spring-cloud-stream-binder-aws-kinesis
- spring-integration-aws
ANSWER
Answered 2021-Sep-16 at 16:50
As long as you are not concerned about Spring Cloud Stream, you should not bring the AWS Kinesis Binder for Spring Cloud Stream dependency into your project: it is simply not going to work without the Spring Cloud Stream features in your project.
Since your application is really a Spring Integration one, you definitely need to use the Spring Integration for AWS dependency. It comes with a KinesisMessageHandler implementation to produce records into a Kinesis stream. See its docs for more info: https://github.com/spring-projects/spring-integration-aws#outbound-channel-adapter-3. Such a handler should be declared as a bean and can be used in the .handle() endpoint of an IntegrationFlow definition. See the docs about existing handlers and missing Java DSL factories: https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-protocol-adapters
UPDATE
How to use the KinesisMessageHandler in the Java DSL:
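A minimal sketch along those lines, assuming an AmazonKinesisAsync client bean is available and using illustrative stream and channel names:

```java
import com.amazonaws.services.kinesis.AmazonKinesisAsync;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.outbound.KinesisMessageHandler;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageHandler;

@Configuration
public class KinesisProducerConfig {

    @Bean
    public MessageHandler kinesisMessageHandler(AmazonKinesisAsync amazonKinesis) {
        KinesisMessageHandler handler = new KinesisMessageHandler(amazonKinesis);
        handler.setStream("my-stream");      // target Kinesis stream (assumption)
        handler.setPartitionKey("1");        // static partition key, just for the sketch
        return handler;
    }

    @Bean
    public IntegrationFlow kinesisSendFlow(MessageHandler kinesisMessageHandler) {
        // Messages sent to 'kinesisSendChannel' are produced as Kinesis records.
        return IntegrationFlows.from("kinesisSendChannel")
                .handle(kinesisMessageHandler)
                .get();
    }
}
```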
QUESTION
I am new to Spring Integration and currently stuck on unit testing my integration flow. My flow looks something like this:
- Receive some data from a TCP channel adapter in XML format.
- Convert it to JSON.
- Send JSON message to amazon sqs queue.
and the XML file is:
...ANSWER
Answered 2021-Sep-13 at 14:13
See the Spring Integration testing support documentation: https://docs.spring.io/spring-integration/docs/current/reference/html/testing.html#testing. The framework provides a MockIntegrationContext via the @SpringIntegrationTest marker on the Spring JUnit test class. The MockIntegration factory lets us create the respective mocks and stub their handling logic. Then you can substitute endpoint beans with your mocks, and so on.
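A minimal sketch of such a test, where the endpoint id, channel name and payload are illustrative assumptions; the mock handler stands in for the SQS outbound endpoint so the flow can be exercised without AWS:

```java
import java.util.concurrent.atomic.AtomicReference;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.integration.test.context.MockIntegrationContext;
import org.springframework.integration.test.context.SpringIntegrationTest;
import org.springframework.integration.test.mock.MockIntegration;
import org.springframework.integration.test.mock.MockMessageHandler;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest
@SpringIntegrationTest(noAutoStartup = "sqsOutboundEndpoint") // endpoint id is an assumption
class XmlToSqsFlowTests {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @Autowired
    private MessageChannel xmlInputChannel; // the channel fed by the TCP adapter in the real flow

    @Test
    void xmlIsConvertedToJsonAndSentToSqs() {
        AtomicReference<Object> sentToSqs = new AtomicReference<>();
        MockMessageHandler mockSqsHandler = MockIntegration.mockMessageHandler()
                .handleNext(message -> sentToSqs.set(message.getPayload()));

        // Substitute the real SQS outbound handler with the mock for this test.
        this.mockIntegrationContext.substituteMessageHandlerFor("sqsOutboundEndpoint", mockSqsHandler);

        this.xmlInputChannel.send(new GenericMessage<>("<order><id>1</id></order>"));

        // The payload captured by the mock should be the JSON produced by the transformer.
        assertThat(sentToSqs.get()).isNotNull();
    }
}
```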
QUESTION
I have two applications - the first produces messages using spring-cloud-stream/function with the AWS Kinesis Binder; the second is an application built on Spring Integration that consumes messages. Communicating between the two is not a problem - I can send a message from "stream" and handle it easily in "integration".
When I want to send a custom header, there is an issue. The header arrives at the consumer as an embedded header using the "new" format (it has an 0xff at the beginning, etc.) - see AbstractMessageChannelBinder#serializeAndEmbedHeadersIfApplicable in spring-cloud-stream.
However, the KinesisMessageDrivenChannelAdapter (spring-integration-aws) does not seem to understand the "new" embedded header form. It uses EmbeddedJsonHeadersMessageMapper (see #toMessage), which cannot "decode" the message. It throws a com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'ÿ': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false') because of the additional information included in the embedded header (the 0xff and so on).
I need to send the header across the wire (the header is used to route on the other side), so it's not an option to "turn off" headers on the producer. I don't see a way to use the "old" embedded headers.
I'd like to use spring-cloud-stream/function on the producer side - it's awesome. I wish I could redo the consumer, but...
I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter.
Given the close relationship between spring-cloud-stream and spring-integration, I must be doing something wrong. Does Spring Integration have an OutboundMessageMapper that understands the new embedded form?
Or is there a way to coerce spring cloud stream to use a different embedding strategy?
I could use Spring Integration on the producer side. (sad face).
Any thoughts? Thanks in advance.
...ANSWER
Answered 2021-May-28 at 17:10
"understands the new format" - it's not a "new" format, it's a format that Spring Cloud Stream created, originally for Kafka, which only added header support in 0.11.
"I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter."
I suggest you do that, and consider contributing it to the core Spring Integration project alongside the EmbeddedJsonHeadersMessageMapper, so that it can be used with all technologies that don't support headers natively.
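A rough sketch of such a mapper, assuming the EmbeddedHeaderUtils and MessageValues classes from spring-cloud-stream are on the classpath; the method names come from their public API and should be verified against the versions in use:

```java
import java.util.Map;

import org.springframework.cloud.stream.binder.EmbeddedHeaderUtils;
import org.springframework.cloud.stream.binder.MessageValues;
import org.springframework.integration.mapping.InboundMessageMapper;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

/**
 * Decodes the Spring Cloud Stream embedded-header format (the 0xff-prefixed payload)
 * back into a Message with its headers restored.
 */
public class ScstEmbeddedHeadersMapper implements InboundMessageMapper<byte[]> {

    @Override
    public Message<?> toMessage(byte[] payload, Map<String, Object> headers) throws Exception {
        if (EmbeddedHeaderUtils.mayHaveEmbeddedHeaders(payload)) {
            // Strip the marker byte and restore the headers embedded by the producer binder.
            MessageValues messageValues = EmbeddedHeaderUtils.extractHeaders(payload);
            return messageValues.toMessage();
        }
        // No embedded headers present: pass the raw payload through.
        return MessageBuilder.withPayload(payload).copyHeadersIfAbsent(headers).build();
    }
}
```

Such a mapper could then be set on the KinesisMessageDrivenChannelAdapter in place of the EmbeddedJsonHeadersMessageMapper, for example via its embedded-headers mapper property, if available in the adapter version in use.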
QUESTION
I have an int-aws:sqs-message-driven-channel-adapter on which, if I set the errorChannel, the downstream exceptions go there.
However, when I don't set an errorChannel, the exception does not get logged. It does not go to the errorChannel, which is expected. Is there a way that such exceptions at least get logged? Is there a default error logger which can simply log such errors?
UPDATE
Posting the XML and DSL config as per the comments. The error is simulated in the persistence layer by setting null for a @NotBlank field on the ServiceObject.
ANSWER
Answered 2021-Apr-21 at 17:42
The SqsMessageDrivenChannelAdapter is fully based on the SimpleMessageListenerContainerFactory from Spring Cloud AWS, and that one just delegates to a listener we provide. Looking at the code, there is simply no error handling there. So the best way to deal with it at the moment is to explicitly set error-channel="errorChannel"; the errors are then logged via the default logger subscribed to that global errorChannel.
And yes: it is not expected to go to the errorChannel by default. I'm not sure there is such an official claim in our docs. It is probably better to think of it as "no error channel by default", so it is up to the underlying protocol client to handle thrown errors. Since there is no such handling there, we don't have a choice other than to set the error channel explicitly.
QUESTION
UPDATE: There is a bug in spring-integration-aws-2.3.4.
I am integrating SFTP (SftpStreamingMessageSource) as the source with S3 as the destination. I have a similar Spring Integration configuration:
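For context, a minimal sketch of that kind of SFTP-streaming-to-S3 flow; the session factory, S3 client, bucket, remote directory, poller interval and key expression are illustrative assumptions, not the configuration from the question:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.jcraft.jsch.ChannelSftp;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.integration.aws.outbound.S3MessageHandler;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.remote.RemoteFileTemplate;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.integration.sftp.inbound.SftpStreamingMessageSource;

@Configuration
public class SftpToS3Config {

    @Bean
    public SftpStreamingMessageSource sftpStreamingMessageSource(
            SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory) {
        SftpStreamingMessageSource source =
                new SftpStreamingMessageSource(new RemoteFileTemplate<>(sftpSessionFactory));
        source.setRemoteDirectory("/incoming"); // remote directory to stream files from
        return source;
    }

    @Bean
    public IntegrationFlow sftpToS3Flow(SftpStreamingMessageSource sftpStreamingMessageSource,
            AmazonS3 amazonS3) {
        S3MessageHandler s3MessageHandler = new S3MessageHandler(amazonS3, "my-bucket");
        // Use the remote file name (header set by the streaming source) as the S3 object key.
        s3MessageHandler.setKeyExpression(
                new SpelExpressionParser().parseExpression("headers['file_remoteFile']"));
        return IntegrationFlows
                .from(sftpStreamingMessageSource, e -> e.poller(Pollers.fixedDelay(5000)))
                .handle(s3MessageHandler)
                .get();
    }
}
```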
...ANSWER
Answered 2021-Jan-25 at 22:10
The S3MessageHandler does this:
QUESTION
I'm following the docs: Spring Batch Integration combined with Spring Integration AWS for polling AWS S3.
But the batch execution per file is not working in some situations.
The AWS S3 polling is working correctly, so when I put a new file, or when I start the application and there are files in the bucket, the application syncs them with the local directory:
...ANSWER
Answered 2021-Jan-12 at 17:12
The JobLaunchingGateway indeed expects only a JobLaunchRequest as a payload.
Since you have that @InboundChannelAdapter(value = IN_CHANNEL_NAME, poller = @Poller(fixedDelay = "30")) on the S3InboundFileSynchronizingMessageSource bean definition, it is really wrong to then have @ServiceActivator(inputChannel = IN_CHANNEL_NAME) for that JobLaunchingGateway without a FileMessageToJobRequest transformer in between.
Your integrationFlow looks OK to me, but then you really need to remove that @InboundChannelAdapter from the S3InboundFileSynchronizingMessageSource bean and fully rely on the c.poller() configuration (see the sketch after this answer).
Another way is to leave that @InboundChannelAdapter, but then start the IntegrationFlow from IN_CHANNEL_NAME, not from a MessageSource.
Since you have several pollers against the same S3 source, and both of them are based on the same local directory, it is not a surprise to see so many unexpected situations.
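A minimal sketch of the first option, where the DSL flow owns the poller and a transformer builds the JobLaunchRequest; the bean names, the poller interval and the job-parameter key are illustrative assumptions:

```java
import java.io.File;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.integration.launch.JobLaunchRequest;
import org.springframework.batch.integration.launch.JobLaunchingGateway;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.inbound.S3InboundFileSynchronizingMessageSource;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;

@Configuration
public class S3BatchFlowConfig {

    @Bean
    public IntegrationFlow s3ToJobFlow(S3InboundFileSynchronizingMessageSource s3MessageSource,
            JobLauncher jobLauncher, Job importFileJob) {
        return IntegrationFlows
                // The poller lives in the flow; no @InboundChannelAdapter on the source bean.
                .from(s3MessageSource, e -> e.poller(Pollers.fixedDelay(30)))
                // File -> JobLaunchRequest, the payload type the JobLaunchingGateway expects.
                .transform(File.class, file -> new JobLaunchRequest(importFileJob,
                        new JobParametersBuilder()
                                .addString("input.file.name", file.getAbsolutePath())
                                .toJobParameters()))
                .handle(new JobLaunchingGateway(jobLauncher))
                // Terminal step: just inspect/log the resulting JobExecution reply.
                .handle(message -> System.out.println("Launched: " + message.getPayload()))
                .get();
    }
}
```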
QUESTION
I've created an on-demand ChannelAdapter, AsyncTaskExecutor and Channel for every queue registered in the application. I noticed that when the maxPoolSize of the AsyncTaskExecutor is equal to one, the messages are not being processed. This is how the AsyncTaskExecutor bean is created.
ANSWER
Answered 2020-Sep-11 at 14:45
It is always bad practice to provide a thread pool with just one thread to some manageable component. You may not know what that component is going to do with your thread pool, and it really could be the case that your single thread is taken internally by some long-living task, so all new tasks just stall in the queue waiting for that single thread to become free, which might never happen.
In fact, that is exactly what we have with the AsynchronousMessageListener from Spring Cloud AWS, which is used by the mentioned SqsMessageDrivenChannelAdapter:
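As a rough illustration of that advice (not the original code from the answer), a ThreadPoolTaskExecutor with more than one thread could be wired into the adapter roughly like this; the queue name, pool sizing and the setTaskExecutor property are assumptions to verify against the spring-integration-aws version in use:

```java
import com.amazonaws.services.sqs.AmazonSQSAsync;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.inbound.SqsMessageDrivenChannelAdapter;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class SqsExecutorConfig {

    @Bean
    public ThreadPoolTaskExecutor sqsTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // One long-running receive task per queue plus workers for processing:
        // a single thread would be consumed by the listener loop and nothing else would run.
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setThreadNamePrefix("sqs-listener-");
        return executor;
    }

    @Bean
    public SqsMessageDrivenChannelAdapter sqsAdapter(AmazonSQSAsync amazonSqs,
            ThreadPoolTaskExecutor sqsTaskExecutor) {
        SqsMessageDrivenChannelAdapter adapter =
                new SqsMessageDrivenChannelAdapter(amazonSqs, "my-queue");
        adapter.setTaskExecutor(sqsTaskExecutor);
        adapter.setOutputChannelName("sqsInputChannel");
        return adapter;
    }
}
```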
QUESTION
I want to implement spring-integration-aws to send and receive messages with SQS. I am looking at Localstack and would like to know the recommendation of the Spring team.
Which tool/API should I use for a local setup of Spring Integration flows for SQS inbound and outbound adapters?
Also, will there be examples of AWS in spring-integration-samples in the future? I am looking for an example with XML config that reads the AWS config from credentials and sends and receives messages via outbound adapters.
...ANSWER
Answered 2020-Aug-17 at 13:55
Not sure what recommendation you expect from us, but I see an answer in your own question - Localstack: https://github.com/localstack/localstack.
In the project tests we indeed use this tool over a Docker container:
We don't have such a test against SQS, but the configuration technique is similar.
I recall hearing that the Testcontainers project can be used for testing AWS services locally as well: https://www.testcontainers.org/modules/localstack/
We don't have the resources to write samples for this project.
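A minimal sketch of a Testcontainers-based SQS test setup; the image tag, queue name and AWS SDK v1 client wiring are illustrative assumptions based on the Testcontainers LocalStack module. In a Spring Integration test, the AmazonSQSAsync built this way would back the SQS channel adapters:

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClientBuilder;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.assertj.core.api.Assertions.assertThat;

@Testcontainers
class SqsLocalStackTests {

    @Container
    static LocalStackContainer localstack =
            new LocalStackContainer(DockerImageName.parse("localstack/localstack:0.14.2"))
                    .withServices(LocalStackContainer.Service.SQS);

    private AmazonSQSAsync buildSqsClient() {
        // Point the AWS SDK v1 client at the LocalStack endpoint instead of real AWS.
        return AmazonSQSAsyncClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        localstack.getEndpointOverride(LocalStackContainer.Service.SQS).toString(),
                        localstack.getRegion()))
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(localstack.getAccessKey(), localstack.getSecretKey())))
                .build();
    }

    @Test
    void sendAndReceiveThroughLocalStack() {
        AmazonSQSAsync sqs = buildSqsClient();
        String queueUrl = sqs.createQueue("test-queue").getQueueUrl();

        sqs.sendMessage(queueUrl, "hello");

        assertThat(sqs.receiveMessage(new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(5))
                .getMessages()).isNotEmpty();
    }
}
```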
QUESTION
When the application stops, the Kinesis binder tries to unlock DynamoDB and throws an unlock-failed exception.
I followed the original post with a similar issue and updated the spring-integration-aws version to v2.3.1.RELEASE, but I am still seeing the same error on application shutdown.
...ANSWER
Answered 2020-May-26 at 19:38
If you confirm that you don't create your own DynamoDbLockRegistry bean, then I see what needs to be corrected.
Nevertheless, this should not be a critical error at the end of the application lifecycle: you have stopped it anyway, and any locks left unreleased because of that error are going to be released the next time the leaseDuration expires.
UPDATE
The fix is here: https://github.com/spring-projects/spring-integration-aws/commit/bc4a1c7c5975555fb5237642b8b97d8633f0f6cb
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install spring-integration-aws
You can use spring-integration-aws like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the spring-integration-aws component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.