LockStep | LockStep Framework | Application Framework library
Trending Discussions on LockStep
QUESTION
Here's a playground link that reproduces the error: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=86ec4f11f407f5d04a8653cc904f991b
I have a trait FooTraitMut that provides access to a specific range of data inside of BarStruct, and I want to generalize this trait so that it can access the same range on multiple BarStructs in lockstep. So I have a MutChannels trait that acts like a type-level function to produce the tuple of references that a visitor needs, e.g. (T, U) --> (&mut T, &mut U).

I haven't actually gotten to the point of using Channels2 because I can't get the simpler Channels1 case to work.

In the playground, the same is done for an immutable trait FooTraitRef, which works as intended. But the mutable one is broken due to an autoref lifetime issue. I think some kind of implicit transformation is happening to the lifetime of self, because I can inline the indexer function and it works fine.
Any help would be greatly appreciated.
The code in question:
...ANSWER
Answered 2021-Feb-22 at 23:13

This error can be reproduced by this example:
QUESTION
So Unity seems to have wrapped WebRTC in a neat package. This looks like good news, since they deprecated UNET without providing a replacement first. Whatever.
I now just so happen to have to implement multiplayer for some games, and since my company doesn't want to invest without having a first impression of how it will be received by gamers, I have to make do without a server to handle connections. So I stumbled on WebRTC, of which DataChannels seem to be perfect for my use case, since I will have to transmit a few bytes representing the game state (which is in lockstep, so no problem there).
However, for the life of me I can't understand how this thing works.
It looks like it exchanges addresses and other data via a Google STUN server, does some offer/answer shenanigans, and thus the data channel is established. However, I can't understand how it knows that two devices are the ones that need to be connected, and I can't understand why my code doesn't work. I made a class that connects local and remote peers, so they should be able to exchange data, right?
...ANSWER
Answered 2020-Nov-13 at 10:25

Your logic looks largely correct to me. I don't know if it will fix your issue, but to make things clearer I would adjust your SDP exchange so the description objects aren't overwritten.
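For illustration, here is a browser-flavoured JavaScript sketch of an offer/answer exchange between two in-page peers, with the description objects kept separate rather than reused (names are mine, not the asker's code; Unity's C# WebRTC package exposes a similar API shape):

```javascript
// Two peers in the same page; candidates are trickled directly across,
// so no signaling server is needed for this local test.
const local = new RTCPeerConnection();
const remote = new RTCPeerConnection();

local.onicecandidate = e => e.candidate && remote.addIceCandidate(e.candidate);
remote.onicecandidate = e => e.candidate && local.addIceCandidate(e.candidate);

const channel = local.createDataChannel("game-state");
channel.onopen = () => channel.send("tick 0");
remote.ondatachannel = e => { e.channel.onmessage = m => console.log(m.data); };

async function connect() {
  const offer = await local.createOffer();
  await local.setLocalDescription(offer);    // the offer stays the offer...
  await remote.setRemoteDescription(offer);
  const answer = await remote.createAnswer();
  await remote.setLocalDescription(answer);  // ...and the answer stays the answer
  await local.setRemoteDescription(answer);
}
connect();
```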
QUESTION
I have a consumer that generated the first version of the Pact contract and uploaded it to the broker. The producer verified the contract, and the verification results were published to the broker.
Now I want to extend the contract. When I publish the updated contract to the broker and subsequently run the verification on the producer side, it fails since the contract-fulfilling API is not implemented yet. I'd like to update the contract first, publish it, and avoid breaking the producer build (i.e. not modifying the consumer and the producer in lockstep).
How can I version consumer/producer/contract so I can specify in the producer that it is currently compatible with a specific consumer/contract version?
I'm using Pact JVM/Java (version 3) with Maven. There is no project versioning in pom.xml - it's just the 1.0.0-SNAPSHOT version. projectVersion, as configured in the Pact Maven plugin, is the same as the Maven project version: 1.0.0-SNAPSHOT.

Should I play with projectVersion and tags? Should I upgrade to Pact version 4 and use consumer version selectors?
ANSWER
Answered 2020-Oct-12 at 04:21

So I think you're asking about how to effectively add Pact tests into your CI/CD pipeline and feature development workflow?
The first document explains the general approach, and the second is a workshop you can follow to implement the steps (in JS). The principles are the same no matter what language you use (in your case Java).
Specifically, however, you will definitely need to use tags to prevent new feature tags from breaking your provider's main build (e.g. a featureA tag created by a consumer won't break a provider that only looks for production and development).
You may also want to look at pending pacts (see https://docs.pact.io/pending and https://docs.pact.io/implementation_guides/jvm/provider/junit5/#pending-pact-support-version-410-and-later), a newer feature that prevents new contracts from breaking a provider.
Versioning
So you'll need to add more specific versions to your code to make effective use of Pact (and the workflows provided by the broker). You can specify this with the pact.provider.version system property (e.g. System.setProperty("pact.provider.version", "some git sha");).
We recommend using your revision control SHA in the version.
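A small sketch of that recommendation; the pact.provider.version property name comes from the answer, while shelling out to git for the SHA is my own illustrative choice:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ProviderVersion {
    public static void main(String[] args) throws Exception {
        // Ask git for the current commit SHA (assumes git is on the PATH).
        Process git = new ProcessBuilder("git", "rev-parse", "--short", "HEAD").start();
        String sha;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(git.getInputStream()))) {
            sha = r.readLine();
        }
        // Set the provider version that the Pact verifier will publish.
        System.setProperty("pact.provider.version", sha);
        System.out.println("pact.provider.version = " + sha);
    }
}
```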
QUESTION
Apart from the __syncthreads() function(s), which synchronize the warps within a thread block, there's another function called __syncwarp(). What exactly does this function do?
The CUDA programming guide says:
will cause the executing thread to wait until all warp lanes named in mask have executed a __syncwarp() (with the same mask) before resuming execution. All non-exited threads named in mask must execute a corresponding __syncwarp() with the same mask, or the result is undefined.
Executing __syncwarp() guarantees memory ordering among threads participating in the barrier. Thus, threads within a warp that wish to communicate via memory can store to memory, execute __syncwarp(), and then safely read values stored by other threads in the warp.
So does this mean that this function ensures synchronization of the threads within a warp that are included in the mask? If so, do we need such synchronization among threads in the same warp, given that they are all ensured to execute in lockstep?
...ANSWER
Answered 2017-Sep-29 at 01:03

This feature is available from CUDA 9, and yes, it synchronizes all threads within a warp; it is useful for divergent warps. This matters on the Volta architecture, in which threads within a warp can be scheduled separately.
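To make the divergence point concrete, here is a minimal sketch (my own example, not from the original answer) of a warp-level shared-memory reduction, assuming one warp of 32 threads per block. On Volta and later, where lanes no longer execute in strict lockstep, removing the __syncwarp() calls would let one lane read smem before another lane has written it:

```cuda
#include <cstdio>

// Reduce 32 floats with a single warp. Each round halves the active lanes;
// reads come from [offset, 2*offset) and writes go to [0, offset), so one
// __syncwarp() per round is enough to order stores before the next loads.
__global__ void warp_reduce(const float *in, float *out) {
    __shared__ float smem[32];
    unsigned lane = threadIdx.x;                  // 0..31, one warp per block
    smem[lane] = in[blockIdx.x * 32 + lane];
    __syncwarp();                                 // all stores to smem visible
    for (unsigned offset = 16; offset > 0; offset >>= 1) {
        if (lane < offset)
            smem[lane] += smem[lane + offset];
        __syncwarp();                             // order this round before the next
    }
    if (lane == 0) out[blockIdx.x] = smem[0];     // lane 0 holds the warp's sum
}

int main() {
    float h_in[32], h_out = 0.0f;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;  // expected sum: 32
    float *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
    warp_reduce<<<1, 32>>>(d_in, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", h_out);                  // prints 32.000000
    return 0;
}
```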
QUESTION
As far as I could gather from Wikipedia and the mind-boggling HPE website, the claim to fame of the NonStop system architecture is that it can achieve single-failure fault tolerance without having to allocate excessive amounts of spare capacity (in a lockstepped architecture you would typically need to overprovision by 3x).
This seems like a desirable property, yet I couldn't find more details about the approach they use and its caveats, i.e. what assumptions they make about the network, the kinds of failures they tolerate, assumed client behavior, the acceptable time to recover, the workflows they run, etc.

Could anybody describe in brief how the NonStop system solves the typical problems of failure detection and failure correction? Is it a generic, magical solution at the system level, or does it require that applications be written to use certain transaction facilities and to checkpoint data and communications?
Thanks a lot!
...ANSWER
Answered 2018-Feb-12 at 20:47

I think it is similar to the IBM architecture: a shared-nothing structure. Lots of redundancy, but nothing is shared or provisioned/dedicated - based on my previous reading on IBM z/OS and mainframes.

Normally this type of system uses a proprietary OS with a modified kernel and special filesystems/drivers to leverage the underlying hardware. In some cases, yes, applications need to be modified to use special transaction libraries, but that is just like needing transaction locks for an RDBMS when you scale it horizontally.

A lot of this HA/FT may be achieved at the kernel level, abstracted away from the applications.

Notice the chip in HPE NonStop systems: it is the Itanium architecture, not regular Xeon chips, just as IBM has had its own proprietary enterprise-class CPU for a while: https://en.wikipedia.org/wiki/Z/Architecture
QUESTION
Please forgive me, I am new to programming and JavaScript/React...
This is the question from my assignment:
Make a counter application using React and Node.js. The user must have the ability to click a button to increase, decrease, or reset a counter. The app must have the following components: Display, DecreaseCount, IncreaseCount, ResetCount. Pass the appropriate functions to be used and the current counter value to each component.
I'm not sure what the point is of creating components for those simple operations. I also don't understand what will make those arithmetical components unique if I'm passing them both a function and a value to work on. But I am assuming the point of the assignment is to show that you can pass state to a child, work on it within the child, and then pass the worked-on result back to the parent to be stored in its state.
Here is the main page, Display.js. For now I'm just trying to get the add functionality to work:
...ANSWER
Answered 2019-Dec-06 at 22:02

You need to use this.props.count within the IncreaseCount component.
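A minimal sketch of that pattern (illustrative names, not the asker's actual code): the parent owns the counter state and hands each child both the current value and a callback, so the child only renders and reports clicks.

```jsx
class Display extends React.Component {
  state = { count: 0 };

  // State changes happen here, in the component that owns the state.
  increase = () => this.setState(({ count }) => ({ count: count + 1 }));

  render() {
    return (
      <div>
        <p>Count: {this.state.count}</p>
        <IncreaseCount count={this.state.count} onIncrease={this.increase} />
      </div>
    );
  }
}

// The child reads the value via this.props.count and calls the prop callback.
class IncreaseCount extends React.Component {
  render() {
    return (
      <button onClick={this.props.onIncrease}>
        Increase (currently {this.props.count})
      </button>
    );
  }
}
```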
QUESTION
I am learning how to use NetLogo, and one of the things I am trying to do is to create a larger neighborhood than the built-in 8 that comes with the agentset neighbors.
I want to use this extended neighborhood to run Conway's Game of Life with more neighbors.
I have used the built-in code from the Game of Life model available in NetLogo's Models Library.
...ANSWER
Answered 2019-Sep-02 at 11:37

NetLogo should tell you which line is giving you the error. Please include that in your future questions.
In this case, the error is (presumably) in the line set live-neighbors count neighbors24 with [living?]. Your problem is that with selects those agents in the specified agentset that meet a condition. So patches with [pcolor = yellow] would get the yellow patches. However, neighbors24 is not an agentset, it's a list of patch coordinates.

It is a common NetLogo novice mistake to create lists, particularly if you have experience with other programming languages. If you are creating lists of agent identifiers (e.g. coordinates for patches, or who numbers for turtles), you almost certainly want an agentset instead.
The modified line let neighbors24 patches with [abs pxcor <= 2 and abs pycor <= 2] will create neighbors24 as an agentset.
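A hedged sketch of how that could be wired into the Life update, with coordinates taken relative to each patch (my own code, not the answer's; note that abs pxcor <= 2 taken literally selects patches near the origin, so the per-patch version needs [pxcor] of myself, and this plain coordinate arithmetic also ignores world wrapping at the edges; living? and live-neighbors are the Life model's own variables):

```netlogo
ask patches [
  ;; every patch within the surrounding 5x5 square, excluding the patch itself
  let neighbors24 other patches with [
    abs (pxcor - [pxcor] of myself) <= 2 and
    abs (pycor - [pycor] of myself) <= 2
  ]
  set live-neighbors count neighbors24 with [living?]
]
```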
QUESTION
I have a number of Rust iterators specified by user input that I would like to iterate through in lockstep.
This sounds like a job for something like Iterator::zip, except that I may need more than two iterators zipped together. I looked at itertools::multizip and itertools::izip, but those both require that the number of iterators to be zipped be known at compile time. For my task the number of iterators to be zipped together depends on user input, and thus cannot be known at compile time.

I was hoping for something like Python's zip function, which takes an iterable of iterables. I imagine the function signature might look like:
ANSWER
Answered 2019-Mar-22 at 02:51

Implement your own iterator that iterates over the input iterators and collects them:
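A sketch of what that might look like (my own code with illustrative names, not the answer's original snippet): hold the iterators in a Vec and, on each next(), advance them all, stopping at the shortest input like Python's zip:

```rust
struct MultiZip<I> {
    iters: Vec<I>,
}

impl<I: Iterator> Iterator for MultiZip<I> {
    type Item = Vec<I::Item>;

    fn next(&mut self) -> Option<Self::Item> {
        // Collecting an iterator of Options into Option<Vec<_>> yields None
        // as soon as any inner iterator is exhausted, which gives the
        // stop-at-shortest semantics of Python's zip. (Caveat: an empty Vec
        // of iterators would yield empty rows forever.)
        self.iters.iter_mut().map(Iterator::next).collect()
    }
}

fn multizip<I: Iterator>(iters: Vec<I>) -> MultiZip<I> {
    MultiZip { iters }
}

fn main() {
    let inputs = vec![vec![1, 2, 3].into_iter(), vec![4, 5].into_iter()];
    for row in multizip(inputs) {
        println!("{:?}", row); // prints [1, 4] then [2, 5], then stops
    }
}
```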
QUESTION
We're working on an RTS game engine using C# and .NET Core. Unlike most other real-time multiplayer games, RTS games tend to work by synchronizing player inputs to other players, and running the game simulation in lockstep on all clients at the same time. This requires game logic to be deterministic so that games don't get out of sync.
One potential source of non-determinism are floating point operations. From what I've gathered the primary issue is with the old x87 FPU instructions - they use an internal 80-bit register, while IEEE-754 floating point values are 32-bit or 64-bit, so values are truncated when moved from registers to memory. Small changes to code and/or the compiler can result in truncation happening at different times, resulting in slightly different results. Non-determinism can also be caused by accidentally using different FP rounding modes, though if I understood correctly this is mostly a solved issue.
I've also gotten the impression that SSE(2) instructions do not suffer from the truncation issue, as they perform all floating point arithmetic in 32- or 64-bit without a higher precision register.
Finally, as far as I know the CLR uses x87 FPU instructions on x86 (or at least that was the case before RyuJIT), and SSE instructions on x86-64. I'm not sure whether that applies to all or just most operations.
Support for accurate single precision math has recently been added to .NET Core, if that matters.
But when researching whether or not floating point can be used deterministically in .NET there are a lot of answers that say no, although they mostly concern older versions of the runtime.
- In a StackOverflow answer from 2013 Eric Lippert said that if you want to guarantee reproducible arithmetic in .NET, you should "Use integers".
- In a discussion about the subject on Roslyn's GitHub page, a game developer said in a comment in 2017 that they were unable to achieve repeatable floating point operations in C#, though they did not specify which runtime(s) they used.
- In a 2011 Game Development Stack Exchange answer the author concludes that he was unable to attain reliable FP arithmetic in .NET. He provides a software-based floating point implementation for .NET, which is binary compatible with IEEE754 floating point.
So, if CoreCLR uses SSE FP instructions on x86-64, does that mean that it doesn't suffer from the truncation issues, and/or any other FP-related non-determinism? We are shipping .NET Core with the engine so every client would use the same runtime, and we would require that the players use exactly the same version of the game client. Limiting the engine to only work on x86-64 (on PC) is also an acceptable limitation.
If the runtime still uses x87 instructions with unreliable results, would it make sense to use a software float implementation (like the one linked in an answer above) for computations on single values, and accelerate vector operations with SSE using the new hardware intrinsics? I've prototyped this and it seems to work, but is it unnecessary?
If we can just use normal floating point operations, is there anything we should avoid, like trigonometric functions?
Finally, if everything is OK so far how would this work when different clients use different operating systems or even different CPU architectures? Do modern ARM CPUs suffer from the 80-bit truncation issue, or would the same code run identically to x86 (if we exclude trickier stuff like trigonometry), assuming the implementation has no bugs?
...ANSWER
Answered 2019-Jan-01 at 15:31

So, if CoreCLR uses SSE FP instructions on x86-64, does that mean that it doesn't suffer from the truncation issues, and/or any other FP-related non-determinism?
If you stay on x86-64 and you use the exact same version of CoreCLR everywhere, it should be deterministic.
If the runtime still uses x87 instructions with unreliable results, would it make sense to use a software float implementation [...] I've prototyped this and it seems to work, but is it unnecessary?
It could be a solution to work around the JIT issue, but you will likely have to develop a Roslyn analyzer to make sure that you are not using floating point operations without going through these... or write an IL rewriter that performs this for you (though that would make your .NET assemblies arch-dependent... which could be acceptable depending on your requirements).
If we can just use normal floating point operations, is there anything we should avoid, like trigonometric functions?
As far as I know, CoreCLR redirects math functions to the compiler's libc, so as long as you stay on the same version and the same platform, it should be fine.
Finally, if everything is OK so far how would this work when different clients use different operating systems or even different CPU architectures? Do modern ARM CPUs suffer from the 80-bit truncation issue, or would the same code run identically to x86 (if we exclude trickier stuff like trigonometry), assuming the implementation has no bugs?
You can have some issues not related to extra precision. For example, for ARMv7, subnormal floats are flushed to zero while ARMv8 on aarch64 will keep them.
So, assuming that you are staying on ARMv8, I don't know exactly how the CoreCLR JIT for ARMv8 behaves in that regard; you should probably ask on GitHub directly. There is also the behavior of the libc, which would likely break deterministic results.
We are working on solving exactly this at Unity with our "burst" compiler, which translates .NET IL to native code. We use LLVM codegen across all machines, disabling a few optimizations that could break determinism (so, overall, we can try to guarantee the behavior of the compiler across platforms), and we also use the SLEEF library to provide deterministic calculation of mathematical functions (see for example https://github.com/shibatch/sleef/issues/187)... so it is possible to do it.
In your position, I would probably try to investigate whether CoreCLR is really deterministic for plain floating point operations between x64 and ARMv8... And if it looks okay, you could call these SLEEF functions instead of System.Math and it could work out of the box, or propose that CoreCLR switch from libc to SLEEF.
QUESTION
Recently I've been studying common indexing structures in databases, such as B+-trees and LSM. I have a solid handle on how point reads/writes/deletes/compaction would work in an LSM.
For example (in RocksDB/levelDB), on a point query read we would first check an in-memory index (memtable), followed by some amount of SST files starting from most to least recent. On each level in the LSM we would use binary search to help speed up finding each SST file for the given key. For a given SST file, we can use bloom filters to quickly check if the key exists, saving us further time.
What I don't see is how a range read specifically works. Does the LSM have to open an iterator on every SST level (including the memtable) and iterate in lockstep across all levels to return a final sorted result? Is it implemented as just a series of point queries (almost definitely not)? Are all potential keys pulled first and then sorted afterwards? I would appreciate any insight someone has here.
I haven't been able to find much documentation on the subject, any insight would be helpful here.
...ANSWER
Answered 2019-Jan-23 at 09:47

RocksDB has a variety of iterator implementations, like the Memtable Iterator, File Iterator, Merging Iterator, etc.
During range reads, the iterator will seek to the start of the range, similar to a point lookup (using binary search within SSTs), via the SeekTo() call. After seeking to the start of the range, a series of iterators is created: one for each memtable, one for each Level-0 file (because of the overlapping nature of SSTs in L0), and one for each deeper level. A merging iterator then collects keys from each of these iterators and yields the data in sorted order until the end of the range is reached.
Refer to this documentation on iterator implementation.
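To make that merging step concrete, here is a toy k-way merge in C++ (my own sketch, not RocksDB code; a real merging iterator also resolves duplicate keys across levels by sequence number, which this omits):

```cpp
// Compile with: g++ -std=c++17 merge.cpp
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Three already-sorted sources, standing in for the memtable, an L0
    // file, and a deeper-level file.
    std::vector<std::vector<std::string>> sources = {
        {"apple", "melon"},          // memtable
        {"banana", "melon", "pear"}, // an L0 file
        {"cherry", "fig"}            // an L1 file
    };

    using Head = std::pair<std::string, size_t>;  // (key, source index)
    std::priority_queue<Head, std::vector<Head>, std::greater<Head>> heap;
    std::vector<size_t> pos(sources.size(), 0);

    // Seed the min-heap with the first key of every source.
    for (size_t i = 0; i < sources.size(); ++i)
        if (!sources[i].empty()) heap.push({sources[i][0], i});

    // Repeatedly emit the globally smallest head and advance its source.
    while (!heap.empty()) {
        auto [key, i] = heap.top();
        heap.pop();
        std::cout << key << '\n';
        if (++pos[i] < sources[i].size()) heap.push({sources[i][pos[i]], i});
    }
}
```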
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.