circuitry | Decouple Ruby applications using SNS fanout with SQS | Pub/Sub library
kandi X-RAY | circuitry Summary
Decouple Ruby applications using SNS fanout with SQS processing. A Circuitry publisher application can broadcast events that are fanned out to any number of SQS queues. This technique is a common approach to implementing an enterprise message bus. For example, applications that care about billing or new-user onboarding can react when a user signs up, without the originating web application being concerned with those domains. In this way, new capabilities can be connected to an enterprise system without change proliferation.
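For illustration, here is a minimal sketch of the underlying SNS-to-SQS fanout pattern in Python with boto3 (the gem itself is Ruby; the topic and queue names below are hypothetical, and the SQS access policy that permits SNS delivery is omitted for brevity):

```python
# Sketch of the SNS fanout pattern that circuitry wraps (hypothetical names).
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# The publisher application broadcasts every event to a single topic.
topic_arn = sns.create_topic(Name="user-signed-up")["TopicArn"]

# Each interested application (billing, onboarding, ...) gets its own queue.
queue_url = sqs.create_queue(QueueName="billing-user-signed-up")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Fan the topic out to the queue; repeat for any number of queues.
# (A queue policy allowing SNS to send messages is also required.)
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# One publish reaches every subscribed queue.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"user_id": 123}))
```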
Top functions reviewed by kandi - BETA
- Subscribes to a message.
- Acquires a lock.
- Subscribes to the queue.
- Processes a message from the middleware.
- Polls the queue for messages.
- Initializes a new instance.
- Sets the configuration of the consumer.
- Configures the options for this broker.
- Unsubscribes from the server.
- Handles a message.
circuitry Key Features
circuitry Examples and Code Snippets
Community Discussions
Trending Discussions on circuitry
QUESTION
I'm new to Chisel, and I was wondering if it's possible to calculate constants in software before Chisel begins designing any circuitry. For instance, I have a module which takes one parameter, myParameter, but from this parameter I'd like to derive more variables (constant1 and constant2) that would later be used to initialize registers.
ANSWER
Answered 2021-Mar-16 at 00:28
"I was wondering if it's possible to calculate constants in software before Chisel begins designing any circuitry"
Unless I'm misunderstanding your question, this is, in fact, how Chisel works.
Fundamentally, Chisel is a Scala library where the execution of your compiled Scala code creates hardware. This means that any pure-Scala code in your Chisel only exists at elaboration time, that is, during execution of this Scala program (which we call a generator).
Now, values in your program are created in sequential order as defined by Scala (more or less the same as in any general-purpose programming language). For example, io is defined before constant1 and constant2, so the Chisel object for io will be created before either constant is calculated, but this shouldn't really matter for the purposes of your question.
A common practice in Chisel is to create custom classes to hold parameters when you have a lot of them. In this case, you could do something like this:
QUESTION
Introduction
I have been working on writing my own bare-metal code for a Raspberry Pi as I build up my bare-metal skills and learn about kernel-mode operations. Due to the complexity, the number of documentation errors, and missing or scattered info, it has been extremely difficult to bring up a custom kernel on the Raspberry Pi. However, I finally got that working.
A very broad overview of what is happening in the bootstrap process
My kernel loads at 0x80000, sends all cores except core 0 into an infinite loop, sets up the stack pointer, and calls a C function. I can set up the GPIO pins and turn them on and off. Using some additional circuitry, I can drive LEDs and confirm that my code is executing.
The problem
However, when it comes to the UART, I have hit a wall. I am using UART0 (PL011). As far as I can tell, the UART is not outputting, although I could be missing it on my scope since I only have an analog oscilloscope. The code gets stuck when outputting the string. I have determined, through hours of reflashing my SD card with different yes/no questions answered by my LEDs, that it is stuck in an infinite loop waiting for the UART transmit FIFO full flag to clear. The UART only accepts 1 byte before becoming full. I cannot figure out why it is not transmitting the data out. I am also not sure if I have correctly set my baud rate, but I don't think that would cause the TX FIFO to stay full.
Getting a foothold in the code
Here is my code. Execution begins at the very start of the binary, which is arranged by linking with the symbol "my_entry_pt" from the assembly source "entry.s" in the linker script. That is where you will find the entry code. However, you probably only need to look at the last file, the C code in "base.c"; the rest is just bootstrapping up to that. Please disregard some comments/names which don't make sense. This is a port (primarily of the build infrastructure) from an earlier bare-metal project of mine. That project used a RISC-V development board which uses a memory-mapped SPI flash to store the binary code of the program:
[Makefile]
...ANSWER
Answered 2021-Feb-25 at 06:27
My suggestions:
- flash your SD card with a Raspberry Pi distribution to make sure the hardware is still working
- if the hardware is good, compare your code against the in-kernel serial driver
QUESTION
I am developing an application to control a fan smartly. The smart fan control circuitry is built around a NodeMCU. There will be two modes of fan control, smart and auto. In smart mode, the fan's speed is set from a predicted value obtained by applying machine-learning algorithms. The predicted value is generated by Python scripts on the server, and the mobile application needs to fetch it from there. I need an MQTT broker in the middle of this communication cycle: the application will get data from the server through the MQTT protocol, and likewise the NodeMCUs and mobile applications will communicate through that MQTT broker. I am using the open-source EMQ MQTT broker. There are two options from EMQ: one is EMQX and the other is EMQ Cloud, whose services are quite expensive. I want to build my own MQTT cloud service, deploying the open-source EMQ broker on my own cloud so that the server and the different clients (NodeMCUs and mobile applications) can connect to it, and I would not need to pay for the services offered by EMQ Cloud.
I am a newbie to the Internet of Things. After research on the internet, I gained the insight to develop this project. Kindly guide me on how to set up this MQTT cloud service so that different clients can connect to the MQTT broker over the internet.
I will be grateful for your technical assistance.
...ANSWER
Answered 2020-Jun-04 at 06:35
Maybe you can use the EMQX public broker, broker.emqx.io:1883, for testing.
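As a quick sanity check, here is a minimal paho-mqtt sketch (Python, paho-mqtt 1.x callback signatures; the topic name is made up) that connects to that public broker:

```python
# Connectivity test against the public EMQX broker. It is a shared, public
# broker, so treat it as a test bench only and don't send anything sensitive.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected, result code:", rc)
    client.subscribe("fan-control/predicted-speed")  # hypothetical topic

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.emqx.io", 1883, keepalive=60)
client.loop_forever()
```

The same topic can then be published to by the server-side Python scripts and subscribed to by the NodeMCUs, with a self-hosted EMQX instance swapped in for broker.emqx.io once it is running.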
QUESTION
I'm trying to implement the following NOR flip flop circuit logic in Go and am having some difficulty with the variable declarations:
My goal is to simulate the logic gates and circuitry as they would physically work. I've implemented a function for the NOR gates [func nor()] and one for the flip flop itself [func norFlipFlop()]. The issue I'm facing is in declaring out0 and out1, since they depend on each other. As can be seen below, out0 is defined as nor(a1, out1) and out1 is defined as nor(out0, a0). This obviously produces a compilation error, since out1 is not yet defined when out0 is defined. Is there a way to make this logic work while keeping it as close to the physical circuit logic as possible?
...ANSWER
Answered 2020-May-23 at 04:50
First, a flip flop stores state, so you need some sort of value to retain that state. Also, apart from a condition (usually avoided in hardware) where A0 and A1 are 0 (false) and Out0 and Out1 are both 1 (true), the outputs (Out0 and Out1) are usually complements of each other, and a flip flop effectively stores only a single boolean value, so you can just use a bool. You typically "pulse" the inputs to set (make true) or reset (make false) the flip-flop's value. Eg:
QUESTION
This question is specifically aimed at modern x86-64 cache coherent architectures - I appreciate the answer can be different on other CPUs.
If I write to memory, the MESI protocol requires that the cache line is first read into the cache, then modified in the cache (the value is written to the cache line, which is then marked dirty). In older write-through micro-architectures, this would then trigger the cache line being flushed; under write-back, the flush can be delayed for some time, and some write combining can occur under both mechanisms (more likely with write-back). And I know how this interacts with other cores accessing the same cache line of data - cache snooping etc.
My question is: if the store matches precisely the value already in the cache (not a single bit is flipped), does any Intel micro-architecture notice this and NOT mark the line as dirty, thereby possibly saving the line from being marked as exclusive and avoiding the writeback memory overhead that would at some point follow?
As I vectorise more of my loops, my vectorised-operation compositional primitives don't explicitly check for values changing, and to do so in the CPU/ALU seems wasteful, but I was wondering if the underlying cache circuitry could do it without explicit coding (e.g. the store micro-op or the cache logic itself). As shared memory bandwidth across multiple cores becomes more of a resource bottleneck, this would seem like an increasingly useful optimisation (e.g. repeated zeroing of the same memory buffer - we don't re-read the values from RAM if they're already in cache, but forcing a writeback of the same values seems wasteful). Writeback caching is itself an acknowledgement of this sort of issue.
Can I politely request holding back on "in theory" or "it really doesn't matter" answers - I know how the memory model works, what I'm looking for is hard facts about how writing the same value (as opposed to avoiding a store) will affect the contention for the memory bus on what you may safely assume is a machine running multiple workloads that are nearly always bound by memory bandwidth. On the other hand an explanation of precise reasons why chips don't do this (I'm pessimistically assuming they don't) would be enlightening...
Update: Some answers along the expected lines here https://softwareengineering.stackexchange.com/questions/302705/are-there-cpus-that-perform-this-possible-l1-cache-write-optimization but still an awful lot of speculation "it must be hard because it isn't done" and saying how doing this in the main CPU core would be expensive (but I still wonder why it can't be a part of the actual cache logic itself).
Update (2020): Travis Downs has found evidence of Hardware Store Elimination, but only, it seems, for zeros, and only where the data misses L1 and L2, and even then, not in all cases. His article is highly recommended as it goes into much more detail: https://travisdowns.github.io/blog/2020/05/13/intel-zero-opt.html
...ANSWER
Answered 2017-Nov-21 at 17:35
Currently no implementation of x86 (or any other ISA, as far as I know) supports optimizing silent stores.
There has been academic research on this, and there is even a patent on "eliminating silent store invalidation propagation in shared memory cache coherency protocols". (Google '"silent store" cache' if you are interested in more.)
For x86, this would interfere with MONITOR/MWAIT; some users might want the monitoring thread to wake on a silent store (one could avoid invalidation and add a "touched" coherence message). (Currently MONITOR/MWAIT is privileged, but that might change in the future.)
Similarly, such could interfere with some clever uses of transactional memory, e.g., if a memory location is used as a guard to avoid explicit loading of other memory locations or, in an architecture that supports it (as in AMD's Advanced Synchronization Facility), to drop the guarded memory locations from the read set.
(Hardware Lock Elision is a very constrained implementation of silent ABA store elimination. It has the implementation advantage that the check for value consistency is explicitly requested.)
There are also implementation issues in terms of performance impact/design complexity. Such would prohibit avoiding read-for-ownership (unless the silent store elimination was only active when the cache line was already present in shared state), though read-for-ownership avoidance is also currently not implemented.
Special handling for silent stores would also complicate implementation of a memory consistency model (probably especially x86's relatively strong model). Such might also increase the frequency of rollbacks on speculation that failed consistency. If silent stores were only supported for L1-present lines, the time window would be very small and rollbacks extremely rare; stores to cache lines in L3 or memory might increase the frequency to very rare, which might make it a noticeable issue.
Silence at cache line granularity is also less common than silence at the access level, so the number of invalidations avoided would be smaller.
The additional cache bandwidth would also be an issue. Currently Intel uses parity only on L1 caches to avoid the need for read-modify-write on small writes. Requiring every write to have a read in order to detect silent stores would have obvious performance and power implications. (Such reads could be limited to shared cache lines and be performed opportunistically, exploiting cycles without full cache access utilization, but that would still have a power cost.) This also means that this cost would fall out if read-modify-write support was already present for L1 ECC support (which feature would please some users).
I am not well-read on silent store elimination, so there are probably other issues (and workarounds).
With much of the low-hanging fruit for performance improvement having been taken, more difficult, less beneficial, and less general optimizations become more attractive. Since silent store optimization becomes more important with higher inter-core communication and inter-core communication will increase as more cores are utilized to work on a single task, the value of such seems likely to increase.
QUESTION
I've read that, in order to temporarily turn off paging according to Intel's system programming guide (volume 3, chapter 9.9), I should disable interrupts before doing anything else. I can easily disable maskable interrupts with cli, but all the manual says about disabling NMIs is
NMI interrupts can be disabled with external circuitry. (Software must guarantee that no exceptions or interrupts are generated during the mode switching operation.)
I've found code that looks like C code for disabling NMIs on this OSDev page, but I don't quite understand what it's supposed to mean.
...ANSWER
Answered 2019-Mar-28 at 09:55
"With external circuitry" means that on the board there are gates before the NMI pins of the processor chip, and if these gates are turned off (closed), no interrupt signals will reach the processor chip's NMI pins.
The outb calls probably activate/deactivate these gates.
NMI means non-maskable, and it means you cannot disable them with software alone.
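For a concrete picture of the mechanism: on PC-compatible boards, bit 7 of the byte written to the CMOS/RTC index port 0x70 is the conventional NMI-disable gate, which is what outb calls in such snippets typically toggle. Below is a hedged sketch of that idea, written as userspace Python over Linux's /dev/port (requires root; bare-metal code would use outb directly, and the CMOS register index 0x0F is an arbitrary harmless choice):

```python
# Illustrative only: toggle the NMI-disable bit via the CMOS index port.
NMI_DISABLE = 0x80       # bit 7 of the value written to port 0x70
CMOS_INDEX_PORT = 0x70
CMOS_DATA_PORT = 0x71

def set_nmi(enabled: bool) -> None:
    with open("/dev/port", "r+b", buffering=0) as port:
        # Select CMOS register 0x0F, with the NMI gate bit set or cleared.
        value = 0x0F if enabled else (NMI_DISABLE | 0x0F)
        port.seek(CMOS_INDEX_PORT)
        port.write(bytes([value]))
        # Dummy read of the data port, as the OSDev snippet does.
        port.seek(CMOS_DATA_PORT)
        port.read(1)

set_nmi(False)  # NMIs masked at the chipset until re-enabled
set_nmi(True)
```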
QUESTION
I have developed a custom STM32L475 board with one GPIO pin wired up for synchronization, along with some other synchronization circuitry. Unfortunately, we decided to route the generated sine signal from module to module. This is not optimal, so I want to change it so that it is not the sine signal that is routed from the master module to the slave modules, but just a digital trigger that restarts the generation of a full sine wave.
To do this I need to be able to set up the MCUs to use the one GPIO pin on each MCU as both an output and a trigger for a timer.
To do this without a HW update I need to combine two things: 1. using the 3 pins (one from each MCU) as open-drain outputs forming an AND gate, which works; 2. using the GPIO pin as an external trigger on a negative edge, which I know it supports.
The question is: is it possible to trigger a timer off an output pin using only one GPIO pin, so that the MCU which finishes its sine generation first triggers itself and the other MCUs, and if so, how? Please note, it must use the level of the output pin itself, even though it is an output pin.
I am a HW developer learning to write firmware for our HW, so I am kind of new to software development and am using HAL. Please be nice.
...ANSWER
Answered 2020-Apr-15 at 09:30
The STM32L475 allows a GPIO to be configured in different modes, which must be (exclusively) selected through the corresponding GPIOx_MODER register: [1]
- (Digital) Input mode
- General purpose output mode
- Alternate function mode
- Analog mode
The alternate function applied in Alternate function mode must also be selected exclusively, through the corresponding GPIOx_AFRL or GPIOx_AFRH register, respectively. [2]
The trigger for an interrupt or timer is an alternate function, and the output of an (analogue or digital) signal is a (different) alternate function, too. Therefore, I think there is no solution to the given problem based on peripheral configuration.
[1] Reference Manual, Rev 7: See
- Section 8.5.1 for GPIO mode selection
- Figures 23/24 in Section 8.4 for explanation
[2] ibid.: See
- Section 8.5.9 for GPIO alternate function selection
- Section 8.4.2 for explanation
QUESTION
I've recently been learning electric circuitry using Arduino and am looking to implement some changes in my Raspberry Pi application.
A few years ago I used this outdated tutorial to create my Pi Bluetooth receiver, which is working well at the moment (https://www.instructables.com/id/Turn-your-Raspberry-Pi-into-a-Portable-Bluetooth-A/), but one downfall of it is that Bluetooth connections have to be accepted via the screen (which is off, because Bluetooth speakers do not have screens).
My plan: use a button to accept Bluetooth connections and a flashing green LED to indicate a connection request.
How can I create a script that 'listens' for Bluetooth pairing requests and runs Python code accordingly while it's listening? Along with this, how can I connect to the device to accept a pairing request?
I'm not too familiar with Raspberry Pi script placement, but am familiar with Python and know how I can connect to GPIO.
Thanks :)
...ANSWER
Answered 2020-Feb-02 at 21:01
What you are searching for is called a Bluetooth agent. You need to use the official Linux Bluetooth protocol stack, BlueZ. There is documentation describing its Agent API; it uses DBus for communication. You need to take the following steps:
- Create a bluetooth agent written in python and publish it at certain DBus object path. Your agent must implement org.bluez.Agent1 interface as described in Agent API doc.
- Then you need to register this agent by calling the RegisterAgent method from the Agent API. Here you provide the DBus path where your agent is located and also the capability, in your case "DisplayYesNo" (the LED serves as a display for the pairing request, and the button, with some timeout, implements Yes/No).
- Also register your agent as the default agent by calling RequestDefaultAgent.
Now if you try to pair with your device, the appropriate function in your agent will be called (I think for your use case it will be RequestAuthorization). If you want to accept the pairing, you just return from this function; if you want to reject it, you must throw a DBus error inside this function.
As a starting point, I would suggest looking at this simple Python agent: https://github.com/pauloborges/bluez/blob/master/test/simple-agent It implements all the functionality you need, so just update it according to your needs.
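Here is a trimmed, hedged sketch of that pattern with dbus-python and a GLib main loop (the object path is made up, the GPIO/LED handling is left as comments, and error handling is omitted):

```python
# Skeleton BlueZ pairing agent, modeled on the simple-agent example.
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

AGENT_PATH = "/my/pairing/agent"  # arbitrary DBus object path

class Agent(dbus.service.Object):
    @dbus.service.method("org.bluez.Agent1", in_signature="o", out_signature="")
    def RequestAuthorization(self, device):
        # Flash the green LED and wait for the button press here (GPIO code
        # not shown). Returning normally accepts the pairing; raising a DBus
        # exception named org.bluez.Error.Rejected rejects it.
        print("pairing request from", device)

    @dbus.service.method("org.bluez.Agent1", in_signature="", out_signature="")
    def Release(self):
        pass

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
agent = Agent(bus, AGENT_PATH)

manager = dbus.Interface(bus.get_object("org.bluez", "/org/bluez"),
                         "org.bluez.AgentManager1")
manager.RegisterAgent(AGENT_PATH, "DisplayYesNo")
manager.RequestDefaultAgent(AGENT_PATH)

GLib.MainLoop().run()
```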
Have fun :)
QUESTION
Note: The suggestion provided by MariposaGentil worked for me (using
...ANSWER
Answered 2019-Aug-08 at 08:47
The problem is those characters; they can't be read by the Arduino IDE.
The &nbsp; and other special characters don't work either, for the same reason: they are special characters that are not copied correctly into the IDE.
QUESTION
The goal is to determine metrics of UDP protocol performance, specifically:
- Minimal possible theoretical RTT (round-trip time, ping)
- Maximal possible theoretical PPS of 1-byte UDP packets
- Maximal possible theoretical PPS of 64-byte UDP packets
- Maximal and minimal possible theoretical jitter
This could and should be done without taking into account any software-caused slowdowns (like 99% CPU usage by a side process or an inefficiently written test program) or hardware issues (like a busy channel, an extremely long line, and so on).
How should I go about estimating these best-possible parameters on a "real system"?
PS. Below I offer a prototype of what I call "a real system".
Consider 2 PCs, PC1 and PC2. They both are equipped with:
- modern fast processors (read "some average typical socket-1151 i7 CPU"), so processing speed and single-core limits are not an issue.
- some typical DDR4 @ 2400 MHz.
- average NICs (read typical Realtek/Intel/Atheros chips, typically embedded in motherboards), so there is no very special complicated circuitry.
- a couple of meters of 8-conductor Ethernet cable connecting their NICs, with an established gigabit link. So no internet, and no traffic between them other than what you generate.
- no monitors
- no other I/O devices
- a single USB flash drive per PC, which boots their initramfs into RAM and is mounted to store program output after the test program finishes
- the lightest possible software stack - probably BusyBox running on top of the latest Linux kernel, with all libs up to date. So virtually no software (read "busyware") runs on them.
And you run a server test program on PC1 and a client on PC2. After the program runs, the USB stick is mounted, the results are dumped to a file, and the system then powers down. So, I've described an ideal situation; I can't imagine more "sterile" conditions for such an experiment.
...ANSWER
Answered 2019-Jun-29 at 00:32
For the PPS calculations, take the total size of the frames and divide it into the throughput of the medium.
For IPv4:
Ethernet preamble, start-of-frame delimiter, and interframe gap: 7 + 1 + 12 = 20 bytes (not counted in the 64-byte minimum frame size).
Ethernet II header and FCS (CRC): 14 + 4 = 18 bytes. IP header: 20 bytes. UDP header: 8 bytes.
Total overhead: 46 bytes (the frame is padded to the 64-byte minimum if the payload is less than 18 bytes) + 20 bytes "more on the wire".
Payload (data):
1-byte payload - padded to 18 bytes by the 64-byte minimum frame size, plus the 20-byte wire overhead, totaling 84 bytes on the wire.
64-byte payload - 46 + 64 = 110 bytes, + 20 for the wire overhead = 130 bytes.
If the throughput of the medium is 125,000,000 bytes per second (1 Gb/s):
1-18 bytes of payload: 1.25e8 / 84 = a max theoretical 1,488,095 PPS.
64 bytes of payload: 1.25e8 / 130 = a max theoretical 961,538 PPS.
These calculations assume a constant stream: the network send buffers are filled constantly. This is not an issue given your modern hardware description. If this were 40/100 Gigabit Ethernet, CPU, bus speeds, and memory would all be factors.
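A quick script to reproduce the arithmetic above (plain Python; the constants are the ones from this answer):

```python
# Recompute the max theoretical PPS figures for 1 Gb/s Ethernet.
LINE_RATE = 125_000_000            # bytes per second on the wire (1 Gb/s)
WIRE_EXTRA = 7 + 1 + 12            # preamble + SFD + interframe gap = 20
FRAME_OVERHEAD = 14 + 4 + 20 + 8   # Eth II + FCS + IP + UDP = 46
MIN_FRAME = 64

def bytes_on_wire(udp_payload: int) -> int:
    frame = max(FRAME_OVERHEAD + udp_payload, MIN_FRAME)  # pad short frames
    return frame + WIRE_EXTRA

for payload in (1, 18, 64):
    wire = bytes_on_wire(payload)
    print(f"{payload:>2}-byte payload: {wire} bytes on wire, "
          f"{LINE_RATE // wire:,} PPS max")
# 1- and 18-byte payloads both occupy 84 bytes -> 1,488,095 PPS
# a 64-byte payload occupies 130 bytes -> 961,538 PPS
```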
Ping RTT time:
To calculate the time it takes to transfer data through a medium, divide the data transferred by the speed of the medium.
This is harder, since the ping data payload could be any size from 64 bytes up to the MTU (~1500 bytes). Ping typically uses the minimum frame size: (64 bytes total frame size + 20 bytes wire overhead) * 2 = 168 bytes, giving a network time of 0.001344 ms. The combined process response and reply time is estimated at between 0.35 and 0.9 ms. This value depends on too many internal CPU and OS factors: L1-L3 caching, branch prediction, the ring transitions required (0 to 3 and 3 to 0), the TCP/IP stack implementation, CRC calculations, interrupts processed, network card drivers, DMA, validation of data (skipped by most implementations)...
Max time should be < 1.25 ms, based on anecdotal evidence. (My best evaluation was 0.6 ms on older hardware; I would expect a consistent average of 0.7 ms or less on the hardware described.)
Jitter: The only inherent theoretical source of network jitter is the asynchronous nature of the transport, which is resolved by the preamble: max < 0.000512 ms (8 bytes). If sync is not established in this time, the entire frame is lost. This is a possibility that needs to be taken into account, since UDP is best-effort delivery.
As evidenced by the description of RTT, the possible variance in the CPU time spent executing identical code, as well as OS scheduling and drivers, makes this impossible to evaluate effectively.
If I had to estimate, I would design for a maximum of 1 ms of jitter, with provisions for lost packets. It would be unwise to design a system intolerant of faults. Even in a "perfect scenario" as described, faults will occur (a nearby lightning strike can induce spurious voltages on the wire). UDP has no inherent method for tolerating lost packets.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported