litmus | Litmus helps SREs and developers practice chaos engineering in a Cloud-native way. Chaos experiment | Testing library

by litmuschaos | HTML | Version: 3.0.0-beta7 | License: Apache-2.0

kandi X-RAY | litmus Summary

litmus is an HTML library typically used in Testing applications. litmus has no bugs, no reported vulnerabilities, a Permissive License, and medium support. You can download it from GitHub.

LitmusChaos is an open-source Chaos Engineering platform that enables teams to identify weaknesses and potential outages in infrastructure by inducing chaos tests in a controlled way. Developers and SREs can practice Chaos Engineering with Litmus as it is easy to use, based on modern chaos engineering principles, and community-collaborated. It is 100% open source and a CNCF project. Litmus takes a cloud-native approach to creating, managing, and monitoring chaos. The platform itself runs as a set of microservices and uses Kubernetes custom resources to define the chaos intent, as well as the steady-state hypothesis.

Support

              litmus has a medium active ecosystem.
              It has 3693 star(s) with 572 fork(s). There are 69 watchers for this library.
There was 1 major release in the last 12 months.
There are 242 open issues and 1051 have been closed. On average, issues are closed in 333 days. There are 78 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of litmus is 3.0.0-beta7

Quality

              litmus has 0 bugs and 0 code smells.

Security

              litmus has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              litmus code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              litmus is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              litmus releases are available to install and integrate.
              Installation instructions are available. Examples and code snippets are not available.
              It has 73722 lines of code, 1679 functions and 658 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.


            litmus Key Features

            No Key Features are available at this moment for litmus.

            litmus Examples and Code Snippets

            No Code Snippets are available at this moment for litmus.

            Community Discussions

            QUESTION

            Does C++11 sequential consistency memory order forbid store buffer litmus test?
            Asked 2022-Mar-29 at 03:12

            Consider the store buffer litmus test with SC atomics:

            ...
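
For reference, the classic store-buffer litmus test with SC atomics looks roughly like the sketch below (illustrative names; not necessarily the question's exact code). The question is whether the outcome r1 == 0 && r2 == 0, where both loads read the initial value, is allowed.

    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1, r2;

    int main() {
        std::thread a([] {
            x.store(1, std::memory_order_seq_cst);   // store x first
            r1 = y.load(std::memory_order_seq_cst);  // then load y
        });
        std::thread b([] {
            y.store(1, std::memory_order_seq_cst);   // store y first
            r2 = x.load(std::memory_order_seq_cst);  // then load x
        });
        a.join();
        b.join();
        // Sequential consistency forbids r1 == 0 && r2 == 0:
        // at least one load must observe the other thread's store.
        assert(!(r1 == 0 && r2 == 0));
    }

As the answer below explains, that outcome would require StoreLoad reordering, which a single total order over all seq_cst operations rules out.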

            ANSWER

            Answered 2022-Mar-29 at 03:12

            That cppreference summary of SC is too weak, and indeed isn't strong enough to forbid this reordering.

What it says looks to me only as strong as x86-TSO (acq_rel plus no IRIW reordering, i.e. a total store order that all reader threads can agree on).

            ISO C++ actually guarantees that there's a total order of all SC operations including loads (and also SC fences) that's consistent with program order. (That's basically the standard definition of sequential consistency in computer science; C++ programs that use only seq_cst atomic operations and are data-race-free for their non-atomic accesses execute sequentially consistently, i.e. "recover sequential consistency" despite full optimization being allowed for the non-atomic accesses.) Sequential consistency must forbid any reordering between any two SC operations in the same thread, even StoreLoad reordering.

            This means an expensive full barrier (including StoreLoad) after every seq_cst store, or for example AArch64 STLR / LDAR can't StoreLoad reorder with each other, but are otherwise only release and acquire wrt. reordering with other operations. (So cache-hit SC stores can be quite a lot cheaper on AArch64 than x86, if you don't do any SC load or RMW operations in the same thread right afterwards.)

See https://eel.is/c++draft/atomics.order#4, which makes it clear that SC operations aren't reordered wrt. each other. The current draft standard says:

            31.4 [atomics.order]

1. There is a single total order S on all memory_order::seq_cst operations, including fences, that satisfies the following constraints. First, if A and B are memory_order::seq_cst operations and A strongly happens before B, then A precedes B in S.

            Second, for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, the following four conditions are required to be satisfied by S:

• (4.1) if A and B are both memory_order::seq_cst operations, then A precedes B in S; and
            • (4.2 .. 4.4) - basically the same thing for sc fences wrt. operations.

            Sequenced before implies strongly happens before, so the opening paragraph guarantees that S is consistent with program order.

4.1 is about ops that are coherence-ordered before/after each other, i.e. a load that happens to see the value from a store. That ties inter-thread visibility into the total order S, making it match program order. The combination of those two requirements forces a compiler to use full barriers (including StoreLoad) to recover sequential consistency from whatever weaker hardware model it's targeting.

            (In the original, all of 4. is one paragraph. I split it to emphasize that there are two separate things here, one for strongly-happens-before and the list of ops/barriers for coherence-ordered-before.)

These guarantees, plus syncs-with / happens-before, are enough to recover sequential consistency for the whole program, if it's data-race-free (a data race would be UB) and if you don't use any weaker memory orders.

            These rules do still hold if the program involves weaker orders, but for example an SC fence between two relaxed operations isn't as strong as two SC loads. For example on PowerPC that wouldn't rule out IRIW reordering the way using only SC operations does; IIRC PowerPC needs barriers before SC loads, as well as after.

            So having some SC operations isn't necessarily enough to recover sequential consistency everywhere; that's rather the point of using weaker operations, but it can be a bit surprising that other ops can reorder wrt. SC ops. SC ops aren't SC fences. See also this Q&A for an example with the same "store buffer" litmus test: weakening one store from seq_cst to release allows reordering.

            Source https://stackoverflow.com/questions/70204442

            QUESTION

Email template td widths change based on content in Outlook only
            Asked 2022-Mar-10 at 01:39

I have a table containing two cells which are set to fixed widths, but they seem to change depending on their content in Outlook only...

If I put a test image in each, they align fine and are equal width, and the same if I put text in each. But if I put an image in one and text in the other, the image one grows and the text one shrinks, despite the image being a fixed size to fit its container.

This only happens in Outlook and looks fine everywhere else:

            Here is a link to litmus to see the issue:

            https://litmus.com/checklist/emails/public/ef3ee40#ol2007

And below is the code with three examples, which are the same structure but just different content:

            ...

            ANSWER

            Answered 2022-Mar-10 at 01:39

If you use percentages, that will work for Outlook (and everything else). I've also taken out the fixed width (290px) on the inner element, since the width is already set on its container and that might confuse something on mobiles that don't have that much space available.

            Source https://stackoverflow.com/questions/71416169

            QUESTION

            Gmail not expanding to 100% width only on iOS
            Asked 2022-Feb-17 at 14:47

I've been struggling with this issue for a bit now and it's really bugging me. Basically, I have some email templates that I've been working on; they work fine on all clients (Litmus tests) except for Gmail specifically on iOS (Android works fine). The issue is that I want all my tables to be 100% width so they're all the same size; however, Gmail resizes the tables seemingly based on the content inside.

Here's a section of my code:

            ...

            ANSWER

            Answered 2022-Feb-17 at 14:47

This sounds like it might be due to this bug, where the Gmail app adds a .munged class to tables and cells and overrides their width with width:auto !important.

A solution would be to add min-width:100% to each table and cell potentially impacted.

            Source https://stackoverflow.com/questions/71159288

            QUESTION

            GNU ARM assembler giving a seemingly irrelevant register in error message
            Asked 2022-Feb-15 at 11:20
            Goal

            I'm building a mutex primitive using gcc inline assembly for a CortexM7 target using the LDREX and STREX instructions, following the Barrier Litmus Tests and Cookbook document from ARM.

            Code ...

            ANSWER

            Answered 2022-Feb-15 at 11:20

            Per @jester's help, I realized I had the wrong constraint on the GCC-inline variable alias for the lock. It should have been "+m", specifying a memory address instead of a register.

            I was also de-referencing the address of the lock when I should have been leaving it as a pointer.

            I changed [lock] "+l"(*lock) to [lock] "+m"(lock) and it now builds.

            Source https://stackoverflow.com/questions/71065708

            QUESTION

Is it possible to set a dark mode image in email?
            Asked 2021-Dec-23 at 14:55

I would like to not only change text colors but also various images and icons in the emails. Is it possible? Shall I use a media query as the docs say? Shall I use display: none, display: block for images in different night-day conditions? Will the different email clients not interfere with it?

            https://www.litmus.com/blog/the-ultimate-guide-to-dark-mode-for-email-marketers/

            ...

            ANSWER

            Answered 2021-Dec-23 at 14:49

Using both @media and [data-ogsc] prefixes looks like the way to go.

            "Will the different email clients not interfere with it?"

            The docs say:

            "As noted above, how email clients in Dark Mode handle your regular HTML emails will vary."

            So you can't be sure if your styles will be overridden or not - that depends on the interpretation of the various and numerous mail-clients.

As the docs mention, @media (prefers-color-scheme: dark) allows you to create the most robust custom Dark Mode themes, where you can implement anything from Dark Mode-specific image swaps to hover effects and background images, while [data-ogsc] prefixes on each CSS rule specifically target the Outlook app.

"Shall I use display: none, display: block for images in different night-day conditions?"

Could be an option, e.g. by getting the client's system time via JS upon opening/reading the mail.

            Though be aware of:

            Old clients, such as Lotus Notes, Mozilla Thunderbird, Outlook Express, and Windows Live Mail all seem to have supported some sort of JavaScript execution. Nothing else does.

            It seems like a bad idea security-wise, so I would expect this to be a feature that won't always be around, even in these clients.

            As stated in the accepted answer to this question

So again, you can't rely on JS being enabled/allowed in the various clients (even in message template scripts).

            Source https://stackoverflow.com/questions/70463389

            QUESTION

            Can release+acquire break happens-before?
            Asked 2021-Nov-15 at 13:23

Many programming languages today have a happens-before relation and release+acquire synchronization operations.

            Some of these programming languages:

            I would like to know if release+acquire can violate happens-before:

            • if it's possible, then I would like to see an example
            • if it's impossible, then I would like to get simple and clear explanations why
            What is release+acquire and happens-before

Release/acquire establishes a happens-before relation between different threads: in other words, everything before the release in Thread 1 is guaranteed to be visible to Thread 2 after the acquire:

            ...

            ANSWER

            Answered 2021-Nov-01 at 04:59

            I would like to know if release+acquire can violate happens-before.

A happens-before relationship cannot be "violated", as it is a guarantee. Meaning: if you established it in a correct way, it will be there, with all its implications (unless there is a bug in the compiler).

However, establishing just any happens-before relationship doesn't guarantee that you've avoided all possible race conditions. You need to establish carefully chosen relationships between the relevant operations, ones that eliminate every scenario in which a data race is possible.
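
As a concrete illustration of a correctly established relationship, here is a minimal release/acquire pairing in C++ (a sketch with illustrative names, not the snippet the answer reviews next):

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                  // non-atomic data
    std::atomic<bool> ready{false};   // synchronization flag

    int main() {
        std::thread producer([] {
            payload = 42;                                  // A: plain write
            ready.store(true, std::memory_order_release);  // B: release store
        });
        std::thread consumer([] {
            while (!ready.load(std::memory_order_acquire)) // C: acquire load
                ;                                          // spin until B is observed
            // C read the value written by B, so B synchronizes-with C,
            // and A happens-before the read below: no data race, 42 is guaranteed.
            assert(payload == 42);
        });
        producer.join();
        consumer.join();
    }

The guarantee covers exactly this pair of operations; other operations in the program still need their own carefully chosen relationships, which is the point made above.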

            Let's review this code snippet:

            Source https://stackoverflow.com/questions/69791898

            QUESTION

            Nested Table Width - Outlook Email
            Asked 2021-Nov-05 at 02:00

            I've created an email template which is rendering incorrectly on Outlook Desktop for Windows (2016, 2019).

The entire layout is a single table, with different parts of the email each taking up a row.

I have two nested tables, each in their own row, with the exact same markup. When I tested it out on Litmus, the second table is narrower than the first.

            How the email is rendered:

            The markup:

            ...

            ANSWER

            Answered 2021-Nov-05 at 02:00

            Through trial and error I found the incriminating line of CSS.

            Source https://stackoverflow.com/questions/69844779

            QUESTION

            Reason for the name of the "store buffer" litmus test on x86 TSO memory model
            Asked 2021-Sep-11 at 08:47

            I've been studying the memory model and saw this (quote from https://research.swtch.com/hwmm):

            ...

            ANSWER

            Answered 2021-Sep-11 at 08:47

            It makes some sense to call StoreLoad reordering an effect of the store buffer because the way to prevent it is with mfence or a locked instruction that drains the store buffer before later loads are allowed to read from cache. Merely serializing execution (with lfence) would not be sufficient, because the store buffer still exists. Note that even sfence ; lfence isn't sufficient.

            Also I assume P5 Pentium (in-order dual-issue) has a store buffer, so SMP systems based on it could have this effect, in which case it would definitely be due to the store buffer. IDK how thoroughly the x86 memory model was documented in the early days before PPro even existed, but any naming of litmus tests done before that might well reflect in-order assumptions. (And naming after might include still-existing in-order systems.)

            You can't tell which effect caused StoreLoad reordering. It's possible on a real x86 CPU (with a store buffer) for a later load to execute before the store has even written its address and data to the store buffer.

            And yes, executing a store just means writing to the store buffer; it can't commit from the SB to L1d cache and become visible to other cores until after the store retires from the ROB (and thus is known to be non-speculative).

            (Retirement happens in-order to support "precise exceptions". Otherwise, chaos ensues and discovering a mis-predict might mean rolling back the state of other cores, i.e. a design that's not sane. Can a speculatively executed CPU branch contain opcodes that access RAM? explains why a store buffer is necessary for OoO exec in general.)

            I can't think of any detectable side-effect of the load uop executing before the store-data and/or store-address uops, or before the store retires, rather than after the store retires but before it commits to L1d cache.

            You could force the latter case by putting an lfence between the store and the load, so the reordering is definitely caused by the store buffer. (A stronger barrier like mfence, a locked instruction, or a serializing instruction like cpuid, will all block the reordering entirely by draining the store buffer before the later load can execute. As an implementation detail, before it can even issue.)

            A normal out of order exec treats all instructions as speculative, only becoming non-speculative when they retire from the ROB, which is done in program order to support precise exceptions. (See Out-of-order execution vs. speculative execution for a more in-depth exploration of that idea, in the context of Intel's Meltdown vulnerability.)

            A hypothetical design with OoO exec but no store buffer would be possible. It would perform terribly, with each store having to wait for all previous instructions to be definitively known to not fault or otherwise be mispredicted / mis-speculated before the store can be allowed to execute.

This is not quite the same thing as saying that they need to have already executed, though (e.g. just executing the store-address uop of an earlier store would be enough to know it's non-faulting, and for a load, doing the TLB/page-table checks will tell you it's non-faulting even if the data hasn't arrived yet). However, every branch instruction would need to be already executed (and known-correct), as would every ALU instruction like div that can fault.

            Such a CPU also doesn't need to stop later loads from running before stores. A speculative load has no architectural effect / visibility, so it's ok if other cores see a share-request for a cache line which was the result of a mis-speculation. (On a memory region whose semantics allow that, such as normal WB write-back cacheable memory). That's why HW prefetching and speculative execution work in normal CPUs.

The memory model even allows StoreLoad reordering, so we're not speculating on memory ordering, only on the store (and other intervening instructions) not faulting. Which again is fine; speculative loads are always fine, it's speculative stores that we must not let other cores see. (So we can't do them at all if we don't have a store buffer or some other mechanism.)

            (Fun fact: real x86 CPUs do speculate on memory ordering by doing loads out of order with each other, depending on addresses being ready or not, and on cache hit/miss. This can lead to memory order mis-speculation "machine clears" aka pipeline nukes (machine_clears.memory_ordering perf event) if another core wrote to a cache line between when it was actually read and the earliest the memory model said we could. Or even if we guess wrong about whether a load is going to reload something stored recently or not; memory disambiguation when addresses aren't ready yet involves dynamic prediction so you can provoke machine_clears.memory_ordering with single-threaded code.)

Out-of-order exec in P6 didn't introduce any new kinds of memory re-ordering because that could have broken existing multi-threaded binaries. (At that time mostly just OS kernels, I'd guess!) That's why early loads have to be speculative if done at all. x86's main reason for existence is backwards compat; back then it wasn't the performance king.

            Re: why this litmus test exists at all, if that's what you mean?
            Obviously to highlight something that can happen on x86.

            Is StoreLoad reordering important? Usually it's not a problem; acquire / release synchronization is sufficient for most inter-thread communication about a buffer being ready to read, or more generally a lock-free queue. Or to implement mutexes. ISO C++ only guarantees that mutexes lock / unlock are acquire and release operations, not seq_cst.

            It's pretty rare that an algorithm depends on draining the store buffer before a later load.
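
As an illustration, a minimal spinlock needs only an acquire RMW to take the lock and a release store to give it back, with nothing seq_cst (and hence no store-buffer drain) on the fast path. This is a sketch, not code from the original answer:

    #include <atomic>
    #include <thread>

    // Minimal spinlock: taking the lock is an acquire RMW,
    // releasing it is a release store. No seq_cst needed.
    class Spinlock {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void lock() {
            while (flag.test_and_set(std::memory_order_acquire))
                ;  // spin until the previous holder releases
        }
        void unlock() {
            flag.clear(std::memory_order_release);
        }
    };

    Spinlock lock_;
    long counter = 0;  // protected by lock_

    int main() {
        auto work = [] {
            for (int i = 0; i < 100000; ++i) {
                lock_.lock();
                ++counter;      // critical section
                lock_.unlock();
            }
        };
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // counter == 200000: the release in unlock() synchronizes-with the acquire
        // in the next successful lock(), so no increments are lost.
    }

That matches the ISO C++ wording above: mutex lock/unlock only have to be acquire and release operations.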

            Say I somehow observed this litmus test on an x86 machine,

            Fully working program that verifies that this reordering is possible in real life on real x86 CPUs: https://preshing.com/20120515/memory-reordering-caught-in-the-act/. (The rest of Preshing's articles on memory ordering are also excellent. Great for getting a conceptual understanding of inter-thread communication via lockless operations.)
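
In the same spirit (not Preshing's program), a compact sketch that can catch the reordering uses relaxed stores and loads, so the compiler emits plain x86 mov instructions; how many reorderings it observes, if any, depends heavily on hardware, compiler, and timing. std::barrier (C++20) is used here just to start both threads' store+load pairs at roughly the same time each round:

    #include <atomic>
    #include <barrier>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1 = 0, r2 = 0;

    int main() {
        constexpr int iters = 1000000;
        int reorders = 0;
        std::barrier<> sync(2);  // lock-step the two threads each round

        std::thread t1([&] {
            for (int i = 0; i < iters; ++i) {
                sync.arrive_and_wait();                    // start round together
                x.store(1, std::memory_order_relaxed);
                r1 = y.load(std::memory_order_relaxed);
                sync.arrive_and_wait();                    // end of round
                if (r1 == 0 && r2 == 0)                    // both loads saw 0:
                    ++reorders;                            // StoreLoad reordering
                x.store(0, std::memory_order_relaxed);     // reset for next round
                y.store(0, std::memory_order_relaxed);
            }
        });
        std::thread t2([&] {
            for (int i = 0; i < iters; ++i) {
                sync.arrive_and_wait();
                y.store(1, std::memory_order_relaxed);
                r2 = x.load(std::memory_order_relaxed);
                sync.arrive_and_wait();                    // t1 does the bookkeeping
            }
        });
        t1.join();
        t2.join();
        std::printf("%d reorderings observed in %d iterations\n", reorders, iters);
    }

Making both stores and loads seq_cst (or putting a seq_cst fence between each store and load) should drive the count to zero.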

            Source https://stackoverflow.com/questions/69112020

            QUESTION

Unwanted indent and dash added in first line of YAML file with ruamel.yaml
            Asked 2021-Jul-12 at 11:21

            I am currently using the following code to load in a single-document template YAML file, changing it slightly, and generating (i.e., dumping) different new deployment files. The code looks like this:

            ...

            ANSWER

            Answered 2021-Jul-11 at 20:46

Without having the source of define_exp_parameters() it is impossible to describe exactly what goes wrong. But before calling it, deployments is a list containing a single element that is a dict (with keys apiVersion, kind, etc.). And after that call, deployments is a list of single-element lists (whose element is the aforementioned dict). You iterate over the "outer" list and dump a single-element list, which, in block style, gives you the - which is the block sequence element indicator.

            If you can't fix define_exp_parameters() to return a list for which each element is a dict again, you can just dump the first element of deployment:

            Source https://stackoverflow.com/questions/68339199

            QUESTION

jQuery window.resize(event) => {} works for inspect element but not when the Chrome (or Edge) window is resized
            Asked 2021-Jun-19 at 16:33

            There are many many questions regarding resize (event) not working online, but I was only able to find one that actually reflected my exact problem but did not have an answer.

When I use the inspector, my website changes from the desktop version to the mobile version when it reaches the breakpoint of <= 540px width. However, when I resize the entire Chrome window, nothing happens (even though my window does get smaller than 540px width).

            I'm not sure if the mobile version will actually work on a mobile as I have no way of testing that currently, but I'm unsure as to whether this is a normal thing with Chrome and the website will work perfectly well on desktop and mobile or whether I'm doing something wrong.

            The related piece of code:

            ...

            ANSWER

            Answered 2021-Jun-19 at 16:33

The problem is not with the resize event or with the browser. It's occurring because you're using window.screen.width, which is relative to the screen, not to the browser window. It doesn't matter if you resize the browser window; the screen width will not change. For example, if your screen has a resolution of 1900x1200, screen.width will always be 1900. Hence, you should use window.innerWidth, or just innerWidth, to get the viewport width. To know more, see this question.

            Your code would be that way:

            Source https://stackoverflow.com/questions/68048189

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install litmus

            Check out the Litmus Docs to get started.

            Support

            Check out the Contributing Guidelines for the Chaos Hub.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/litmuschaos/litmus.git

          • CLI

            gh repo clone litmuschaos/litmus

          • sshUrl

            git@github.com:litmuschaos/litmus.git
