downstream | Straightforward way to implement communication | Application Framework library
kandi X-RAY | downstream Summary
This gem provides a straightforward way to implement communication between Rails Engines using the Publish-Subscribe pattern, helping to decouple engines by means of events. An event is an object recorded in the system that reflects an action performed by an engine, together with the params that led to its creation. The gem is inspired by active_event_store and was initially based on its codebase; unlike it, however, it does not store every published event in a database, which keeps it simple and fast.
Top functions reviewed by kandi - BETA
- Checks whether the event matches the given event.
- Looks up a new publisher instance.
- Serializes to_id.
- Sets the expected number.
- Generates a message for the given expression.
- Sets the pubkey for this publisher.
- Raises an exception if the event is not defined.
- Creates a new message.
- Specifies the given expression to the given number.
- Sets the attributes with the given attributes.
downstream Key Features
downstream Examples and Code Snippets
# some/engine.rb
initializer "my_engine.subscribe_to_events" do
  # To make sure the event store is initialized, use a load hook
  # `store` == `Downstream`
  ActiveSupport.on_load "downstream-events" do |store|
    store.subscribe MyEventHandler, to: ProfileCreated
  end
end
class ProfileCreated < Downstream::Event
  # (optional)
  # Event identifier is used for streaming events to subscribers.
  # By default, identifier is equal to the underscored class name.
  # You don't need to specify the identifier manually, only for backward compatibility.
  self.identifier = "profile_created"

  # Add attribute accessors
  attributes :user
end
it "is subscribed to some event" do
allow(MySubscriberService).to receive(:call)
event = MyEvent.new(some: "data")
Downstream.publish event
expect(MySubscriberService).to have_received(:call).with(event)
end
expect { subject }.to have_published_event(ProfileCreated)
Community Discussions
Trending Discussions on downstream
QUESTION
I really searched for this one, because I am almost certain some variation has been asked before, but I couldn't put the correct terms into Google to get a result that matches what I am trying to do. Generally, it seems like people are looking for the total number of combinations without constraints.
I am trying to do the following:
Given a list like this:
[1, 1, 2, 2, 3, 3]
group it into as many groups of [1, 2, 3] as possible.
So:
[1, 1, 2, 2, 3, 3] -> [[1, 2, 3], [1, 2, 3]]
[1, 1, 2, 3, 3] -> [[1, 2, 3], [1, 3]]
[1, 1, 3, 3, 5] -> [[1, 3, 5], [1, 3]]
[1, 4, 4, 7] -> [[1, 4, 7], [4]]
Notes:
Input will always be sorted, but the values of these numbers are not known, so it will need to work in the general sense.
The idea is that I have objects with certain attributes that need to be grouped together to create a different object, but sometimes I am given repeats (and potentially incomplete repeats). That is, I used to think that the attributes of my objects would always just be [1, 2, 3], but it turns out sometimes I can get [1, 1, 2, 2, 3, 3], and I need a way to break that into two [1, 2, 3] lists to create an intermediate object downstream.
ANSWER
Answered 2022-Apr-01 at 06:01
You can use zip_longest and groupby from itertools:
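The answer's code snippet did not survive the page extraction. The following is a minimal sketch of that approach, assuming a sorted input as stated in the question; the function name and the use of None as the fill value are illustrative choices, not the original answer's code.

from itertools import groupby, zip_longest

def split_into_groups(items):
    # Bucket equal values together (the input is assumed to be sorted),
    # then take one element from each bucket per output group.
    buckets = [list(group) for _, group in groupby(items)]
    # zip_longest pads shorter buckets with None; drop the padding.
    return [[value for value in row if value is not None]
            for row in zip_longest(*buckets)]

print(split_into_groups([1, 1, 2, 2, 3, 3]))  # [[1, 2, 3], [1, 2, 3]]
print(split_into_groups([1, 1, 2, 3, 3]))     # [[1, 2, 3], [1, 3]]
print(split_into_groups([1, 1, 3, 3, 5]))     # [[1, 3, 5], [1, 3]]
print(split_into_groups([1, 4, 4, 7]))        # [[1, 4, 7], [4]]

Using None as the fill value assumes None never appears in the input; a dedicated sentinel object would be safer in general.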
QUESTION
I have a standalone Blazor WASM site (client), a separate .NET 6 web API (server) with protected endpoints and I'm trying to call MS Graph from the API.
I've read just about every article I could find on the configuration required to make this work and I'm stuck with the incremental consent failing. I get the following error when trying to access a server API which uses MS Graph:
Error acquiring a token for a downstream web API - MsalUiRequiredException message is: AADSTS65001: The user or administrator has not consented to use the application with ID '[redacted]' named '[redacted]'. Send an interactive authorization request for this user and resource.
Configuration:
- Created an AAD app for the Web API (server), added a secret for the Graph configuration, set the app URI, and created an access_as_user scope under "Expose an API" in AAD.
- Added the client ID (from the following step) to the knownClientApplications section in the manifest for the server app registration in AAD.
- For API Permissions, added the Graph scopes User.Read, User.Read.All, and Group.Read.All and provided admin consent in the AAD UI.
- Configured appsettings.json in the API to add the Graph API BaseUrl and the scopes listed above, along with the correct AzureAD domain, TenantId, ClientId, and ClientSecret values for MSAL to function.
- Configured MSAL on the server:
ANSWER
Answered 2022-Mar-10 at 22:30
The issue here is the use of the AddMicrosoftGraph method when the API application is being built.
The GraphServiceClient created by AddMicrosoftGraph will have default access to delegated permissions, which are assigned to users, as opposed to application permissions, which are assigned to applications. This is why the MsalUiRequiredException is being thrown, which is usually resolved by prompting the user to log in.
You can read more about delegated vs application permissions here.
What you can do instead is use the AddMicrosoftGraphAppOnly method to create a GraphServiceClient that will use credentials specific to your API to retrieve the relevant data needed from the Microsoft Graph API.
QUESTION
We have a number of common upstream pipelines - pipeline-a, pipeline-b, pipeline-c, pipeline-d ... each in its own repository - repository-a, repository-b, repository-c, repository-d... My target pipeline, say pipeline-y in repository-y, has a dependency on these upstream pipelines' artifacts, and the target pipeline needs to build when there is a change to any of the upstream libraries and the corresponding upstream pipeline builds successfully. In other words, target pipeline-y needs to be triggered if any of the upstream pipelines completes successfully due to changes in it (CI triggers for the upstream libraries work fine in their own pipelines).
We currently achieved this, using the resources pipelines trigger in the target pipeline-y, as below:
Upstream Pipeline - pipeline-a.yml
ANSWER
Answered 2022-Mar-22 at 11:17
It's not possible to dynamically specify resources in YAML.
A suggestion could be to use REST API hooks when new pipelines are added. Then trigger a program that generates new YAML for pipeline-y.yml.
QUESTION
So I was upgrading DAGs from airflow version 1.12.15 to 2.2.2 and DOWNGRADING python from 3.8 to 3.7 (since MWAA doesn't support python 3.8). The DAG is working fine on the previous setup but shows this error on the MWAA setup:
ANSWER
Answered 2022-Feb-23 at 16:41
For Airflow >= 2.0.0, assigning a task to a DAG using bitwise shift (bit-shift) operators is no longer supported.
Trying to do:
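The snippet that followed was lost in extraction. Below is a minimal sketch of the change in style, assuming Airflow 2.2 with DummyOperator; the DAG id and task names are placeholders, not the asker's code.

# Airflow 1.x allowed attaching a task to a DAG with the shift operator,
# e.g. `dag >> task`; that was removed in Airflow 2.x.
# In Airflow 2.x, attach tasks via the `dag=` argument or a context manager,
# and keep `>>` only for task-to-task dependencies.
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="example_dag",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
) as dag:
    start = DummyOperator(task_id="start")
    finish = DummyOperator(task_id="finish")

    start >> finish  # ordering between tasks still uses >>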
QUESTION
I'm working on exporting data from Foundry datasets in parquet format using various Magritte export tasks to an ABFS system (but the same issue occurs with SFTP, S3, HDFS, and other file based exports).
The datasets I'm exporting are relatively small, under 512 MB in size, which means they don't really need to be split across multiple parquet files, and putting all the data in one file is enough. I've done this by ending the previous transform with a .coalesce(1)
to get all of the data in a single file.
The issues are:
- By default the file name is part-0000-.snappy.parquet, with a different rid on every build. This means that whenever a new file is uploaded, it appears in the same folder as an additional file; the only way to tell which is the newest version is by the last-modified date.
- Every version of the data is stored in my external system; this takes up unnecessary storage unless I frequently go in and delete old files.
All of this is unnecessary complexity being added to my downstream system; I just want to be able to pull the latest version of the data in a single step.
ANSWER
Answered 2022-Jan-13 at 15:27
This is possible by renaming the single parquet file in the dataset so that it always has the same file name; that way the export task will overwrite the previous file in the external system.
This can be done using raw file system access. The write_single_named_parquet_file function below validates its inputs, creates a file with a given name in the output dataset, then copies the file in the input dataset to it. The result is a schemaless output dataset that contains a single named parquet file.
Notes
- The build will fail if the input contains more than one parquet file. As pointed out in the question, calling .coalesce(1) (or .repartition(1)) is necessary in the upstream transform.
- If you require transaction history in your external store, or your dataset is much larger than 512 MB, this method is not appropriate, as only the latest version is kept, and you likely want multiple parquet files for use in your downstream system. The createTransactionFolders (put each new export in a different folder) and flagFile (create a flag file once all files have been written) options can be useful in this case.
- The transform does not require any Spark executors, so it is possible to use @configure() to give it a driver-only profile. Giving the driver additional memory should fix out-of-memory errors when working with larger datasets.
- shutil.copyfileobj is used because the 'files' that are opened are actually just file objects.
Full code snippet
example_transform.py
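The full example_transform.py did not survive the extraction. The following is a hedged sketch of what such a transform might look like, assuming the transforms.api raw file-system interface (filesystem().ls() / open()); the dataset paths, output file name, and overall layout are illustrative, not the original snippet.

import shutil

from transforms.api import Input, Output, transform


@transform(
    output=Output("/path/to/output/dataset"),  # placeholder path
    source=Input("/path/to/input/dataset"),    # placeholder path
)
def write_single_named_parquet_file(output, source):
    # Validate: this approach only works when the input holds exactly one parquet file.
    parquet_files = list(source.filesystem().ls(glob="*.parquet"))
    if len(parquet_files) != 1:
        raise ValueError(
            "Expected exactly one parquet file, found %d; "
            "call .coalesce(1) in the upstream transform" % len(parquet_files)
        )

    # Copy the single input file to a stable name in the output dataset.
    # shutil.copyfileobj is used because the opened 'files' are file objects.
    with source.filesystem().open(parquet_files[0].path, "rb") as src, \
            output.filesystem().open("data.parquet", "wb") as dst:
        shutil.copyfileobj(src, dst)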
QUESTION
I am trying to implement a simple collector, which takes a list of collectors and simultaneously collects values in slightly different ways from a stream.
It is quite similar to Collectors.teeing, but differs in that it:
- Receives a list of collectors instead of just two
- Requires all collectors to produce a value of the same type
The type signature I want to have is
ANSWER
Answered 2022-Feb-07 at 13:37
Handling a list of collectors with arbitrary accumulator types as a flat list can't be done in a type-safe way, as it would require declaring n type variables to capture these types, where n is the actual list size.
Therefore, you can only implement the processing as a composition of operations, each with a finite number of components known at compile time, like your recursive approach.
This still has potential for simplification, like replacing downstreamCollectors.size() == 0 with downstreamCollectors.isEmpty(), or downstreamCollectors.stream().skip(1).toList() with the copy-free downstreamCollectors.subList(1, downstreamCollectors.size()).
But the biggest impact comes from replacing the recursive code with a stream reduction operation:
QUESTION
What is the difference between these two approaches?
val result = remember(key1, key2) { computeIt(key1, key2) } (Docs)
val result by remember { derivedStateOf { computeIt(key1, key2) } } (Docs)
Both avoid re-computation if neither key1 nor key2 has changed.
The second also avoids re-computations if downstream states are derived, but else, they are identical in their behavior, aren't they?
ANSWER
Answered 2022-Jan-31 at 14:49
AFAIK there is no difference here. It's just a coincidence that both constructs are doing the same thing in this context. But there are differences!
The biggest one is that derivedStateOf is not composable and does no caching on its own (remember does). So derivedStateOf is meant for long-running calculations that have to be run only if a key changes, or it can be used to merge multiple states outside a composable (in a custom class, for example).
I think the exact explanation is blurred for "outsiders"; we need some input from a Compose team member here :). My source for the above is one thread on Slack and my own experiments.
EDIT:
Today I learned another derivedStateOf usage, a very important one: it can be used to limit the recomposition count when a calculation depends on a very frequently changing value.
Example:
QUESTION
I am writing a lambda function that takes a list of CW Log Groups and runs an "export to s3" task on each of them.
I am writing automated tests using pytest and I'm using moto.mock_logs (among others), but create_export_task() is not yet implemented (NotImplementedError).
To continue using moto.mock_logs for all other methods, I am trying to patch just that single create_export_task() method using mock.patch, but it's unable to find the correct object to patch (ImportError).
I successfully used mock.Mock() to provide just the functionality that I need, but I'm wondering if I can do the same with mock.patch()?
Working Code: lambda.py
ANSWER
Answered 2022-Jan-28 at 10:09
"I'm wondering if I can do the same with mock.patch()?"
Sure, by using mock.patch.object():
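The answer's code block was lost in extraction. A minimal sketch of the idea follows, assuming lambda.py creates a module-level boto3 logs client; the module name, handler, and response fields are hypothetical placeholders, not the asker's code.

from unittest import mock

import my_lambda  # hypothetical: the module that does `logs = boto3.client("logs")`


def test_creates_export_task():
    fake_response = {"taskId": "00000000-0000-0000-0000-000000000000"}  # made-up value

    # Patch the attribute on the object that owns it, instead of patching by
    # import path; moto keeps mocking every other CloudWatch Logs API call.
    with mock.patch.object(
        my_lambda.logs, "create_export_task", return_value=fake_response
    ) as patched:
        my_lambda.handler({}, None)  # hypothetical entry point

    patched.assert_called_once()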
QUESTION
New to NextFlow here, and struggling with some basic concepts. I'm in the process of converting a set of bash scripts from a previous publication into a NextFlow workflow.
I'm converting a simple bash script (included below for convenience) that did some basic prep work and submitted a new job to the cluster scheduler for each iteration.
Ultimate question: What is the most NextFlow-like way to incorporate this script into a NextFlow workflow (preferably using the new DSL2 schema)?
Possible subquestion: Is it possible to emit a list of lists based on bash variables? I've seen ways to pass lists from workflows into processes, but not out of a process. I could print each set of parameters to a file and then emit that file, but that doesn't seem very NextFlow-like.
I would really appreciate any guidance on how to incorporate the following bash script into a NextFlow workflow. I have added comments and indicated the four variables that I need to emit as a set of parameters.
Thanks!
ANSWER
Answered 2022-Jan-17 at 01:18
"What is the most NextFlow-like way to incorporate this script into a NextFlow workflow?"
In some cases, it is possible to incorporate third-party scripts that do not need to be compiled "as-is" by making them executable and moving them into a folder called 'bin' in the root directory of your project repository. Nextflow automatically adds this folder to the $PATH in the execution environment.
However, some scripts do not lend themselves to inclusion in this manner. This is especially the case if the objective is to produce a portable and reproducible workflow, which is how I interpret "the most Nextflow-like way". The objective ultimately becomes how to run each process step in isolation. Given your example, below is my take on this:
QUESTION
To improve load times from a SQL server and improve performance I tried the following:
In a query called SQL_Query I use
ANSWER
Answered 2022-Jan-13 at 10:03
No, that won't work. Your second query will issue another query of its own.
Buffer can help if you are reusing data within the same query, like referencing the same set multiple times.
Keep in mind that in Power Query there is a memory limit of around 256 MB for the amount of data that will be held in memory; after that it will start paging the data.
Consider having a look at this link from Chris Webb for an example:
improving-power-query-calculation-performance-with-list-buffer
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install downstream