LargeData | To send and receive large data sets from REST APIs

by lokeshlal · C# · Version: Current · License: No License

kandi X-RAY | LargeData Summary


LargeData is a C# library. It has no reported bugs or vulnerabilities, and it has low support. You can download it from GitHub.

Recently I was working on a problem where we had to transfer up to 5 GB of data (the maximum, or first, load) between a client (a WPF application) and a server (Web APIs), in both the download and upload directions. All of this had to happen over a VPN connection, with users across the world on varying internet speeds (sometimes as low as 16 kbps).

The initial implementation leveraged an existing sync framework to synchronize the data and let the framework handle all the changes users made in the database. However, the sync framework made things very slow because of its database triggers, which impacted inserts and updates (in one transaction, a maximum of 100K records could be inserted or updated). The sync framework took roughly 30 minutes to sync the entire data set from server to client and more than an hour to sync back from client to server.

To remove this performance bottleneck, the first step was to get rid of the sync framework. But then, how do we keep track of changes without many changes to the existing code (everything else was working just fine except the data synchronization)? That part is not covered by this library, i.e. how to keep track of what changed. We already had a "rowversion" column in all the tables (we were using SQL Server), which is always incremental across the database. So, to keep track of child rows, we added a LastSyncedOn column to the parent table (covering all of its child tables) and set LastSyncedOn to @@DBTS, meaning "the last time the database was synced was at this timestamp". If any child table row's rowversion is greater than this value, that row is new (either inserted or updated).

On that basis we created the DataSet, both at the client and at the server, fed it to this utility, and sent it across. After these changes, we were able to download the same data set in under 5 minutes and upload it in under 10 minutes (depending on the changes made to the data on the client side).
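The change-detection rule above is simple enough to sketch outside SQL. The following is an illustrative Python sketch (hypothetical rows and timestamp values, not the library's actual code), where rowversion is modelled as a monotonically increasing integer:

```python
# Sketch of the rowversion-based change detection described above.
# SQL Server's rowversion is modelled as an ever-increasing integer;
# last_synced_on stands in for the @@DBTS value captured at the last sync.

def changed_rows(rows, last_synced_on):
    # a row counts as new or updated if it was written after the last sync
    return [r for r in rows if r["rowversion"] > last_synced_on]

child_table = [
    {"id": 1, "rowversion": 100},  # untouched since last sync
    {"id": 2, "rowversion": 205},  # updated after last sync
    {"id": 3, "rowversion": 310},  # inserted after last sync
]

delta = changed_rows(child_table, last_synced_on=200)  # rows 2 and 3
```

Only the delta rows then need to go into the DataSet that is handed to the utility for transfer.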
In some places we also ended up using a DataReader, as the DataSet was throwing an out-of-memory exception.

To include the LargeData controller, add the controller-registration line to the Global file's Application_Start() event, and configure a staging location where temporary files will be stored.

Use the Filter class to send additional information about uploads and downloads. Filters can be used in the callbacks to retrieve and process the data; a filter can contain the following (but is not limited to it).

LargeData is a single assembly containing code for both the client and the server. The client references the assembly directly and uses the LargeData.Client.LargeData class to send and receive the DataSets. The server also references the assembly and registers the LargeData controller in the Global class (as mentioned in the "Server Configuration" section above). LargeData provides output in two formats.

The REST APIs have to provide a method to handle all incoming upload and download requests. This is a single method for every kind of upload and download request, so to handle the various scenarios it also accepts a List of filters (as described in the "Filter class" section) to differentiate between the different request types.

Please note that the background worker process runs in the ASP.NET context; it can be triggered via Hangfire or Quartz, or a separate Windows service could be created, depending on the requirements. (In my original code I wrote a Windows service for the background process, but I feel Hangfire is a good fit.)

Have a look at the Client and WebApi projects for how to use this library: the Client project shows how to trigger upload and download requests, and the WebApi project shows how to process them.

Support

LargeData has a low active ecosystem.
It has 4 stars, 0 forks, and 2 watchers.
It has had no major release in the last 6 months.
LargeData has no reported issues and no open pull requests.
It has a neutral sentiment in the developer community.
The latest version of LargeData is current.

Quality

              LargeData has no bugs reported.

Security

              LargeData has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              LargeData does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              LargeData releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            LargeData Key Features

            No Key Features are available at this moment for LargeData.

            LargeData Examples and Code Snippets

            No Code Snippets are available at this moment for LargeData.

            Community Discussions

            QUESTION

            Dealing with outliers in Pandas
            Asked 2021-Jan-02 at 23:08

            Good day. The problem is the following - when trying to remove outliers from one of the columns in the table

            ...

            ANSWER

            Answered 2021-Jan-02 at 23:08

Your z-score is computed over only one column, so the result is a one-dimensional array.
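A minimal sketch of the fix (hypothetical data, with the z-score computed by hand to keep the example self-contained): since the z-score of a single column is one-dimensional, it can be used directly as a boolean mask, with no `.all(axis=1)` step as would be needed for a whole-frame z-score.

```python
import numpy as np
import pandas as pd

# 19 typical readings plus one obvious outlier (hypothetical data)
df = pd.DataFrame({"value": np.r_[np.full(19, 10.0), 300.0]})

# z-score of a single column: a one-dimensional result
z = (df["value"] - df["value"].mean()) / df["value"].std(ddof=0)

# use the 1-D mask directly -- no .all(axis=1) needed
filtered = df[np.abs(z) < 3]
```

The outlier row is dropped and the 19 typical rows survive.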

            Source https://stackoverflow.com/questions/65544887

            QUESTION

            How to unpack results from `Pool.map()`?
            Asked 2019-Aug-08 at 00:21

            I've got a function (preprocess_fit) that will first preprocess a data set (i.e. smoothing, baseline correction, and filters out bad data). The function then takes an initial guess for a parameter and then iterates through guesses to find the optimised fit and then returns const1, const2. The function also calculates a bunch of other parameters but they are not returned in this instance.

            I then need to loop this function over all files in a directory (~1000 files). I do this by using the second function (function) that contains a for loop. The preprocessing step, in particular the guess iteration is particularly time consuming.

            I'd like to pool the function (function) using the multiprocessing module and unpack the constants and then append to a list. The try: except: is included as some files are missing metadata and the preprocess_fit function fails and I'd like a nan value to be appended to the list when this occurs.

Issues: (1) Pool cannot unpack the function's results; (2) if I only return const1 from function(files), the process results are appended to the list rather than the outputs.

            Any suggestion would be great.

            ...

            ANSWER

            Answered 2019-Feb-07 at 18:12

            When your function is returning multiple items, you will get a list of result-tuples from your pool.map() call. const1 would need all first items in these tuples, const2 all second items in these tuples. That's a job for the zip builtin-function, which returns an iterator that aggregates elements from each of the iterables passed as arguments.

            You have to unpack the list so the result-tuples are the arguments for the zip function. Then unpack the iterator by assigning to multiple variables:
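A minimal sketch of that unpacking (with a stand-in preprocess_fit and a thread-backed Pool from multiprocessing.dummy so it runs anywhere; the real code would use multiprocessing.Pool with the same API):

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool

def preprocess_fit(path):
    # stand-in for the real preprocessing/fit; returns (const1, const2),
    # or NaNs when the "file" cannot be parsed (mimicking missing metadata)
    try:
        n = int(path.rsplit("_", 1)[1])
        return n * 2, n * 3
    except (IndexError, ValueError):
        return float("nan"), float("nan")

files = ["scan_1", "scan_2", "scan_3"]

with Pool(2) as pool:
    results = pool.map(preprocess_fit, files)  # list of (const1, const2) tuples

# zip(*results) transposes the list of result-tuples:
# const1 collects every first item, const2 every second item
const1, const2 = zip(*results)
```

Here `const1` ends up as `(2, 4, 6)` and `const2` as `(3, 6, 9)`, one entry per input file.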

            Source https://stackoverflow.com/questions/54575163

            QUESTION

Newtonsoft.Json default string value not working?
            Asked 2019-Jun-27 at 08:43

When I serialise an object, for some string properties I would like to output an empty string rather than ignoring them or outputting null.

According to Newtonsoft's documentation, I could do this:

            ...

            ANSWER

            Answered 2019-Jun-27 at 08:42

            Looking at DefaultValueHandling it doesn't look like there's any way of doing what you want.

            The default value attribute is only used when deserializing, if the property isn't specified in the JSON. The ignore / include choices are the ones which are relevant when serializing, and those don't affect the value that's serialized - just whether or not it should be serialized.

            Unless you've got code which actually sets the value to null, the simplest option would be to make the property default to "" from a .NET perspective:

            Source https://stackoverflow.com/questions/56787035

            QUESTION

            How to selectively deep-copy in python?
            Asked 2019-Jun-06 at 13:15

I have multiple Python classes. Objects of one class (say, class1) hold a large amount of data (which will not be altered at runtime). Another class (say, class2) has a member variable that maps to one of the objects of class1. Assume class1 and class2 both have other member variables, some mutable and some immutable.

Now I want to do a deepcopy of an object of class2. That will also make a deep copy of the class1 object inside class2, but to save memory I want to avoid that. How do I do this?

            ...

            ANSWER

            Answered 2019-Jun-06 at 13:15

            Use the __deepcopy__ hook method to customize how your objects are deep-copied.
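A minimal sketch of that hook (hypothetical class names): `__deepcopy__` on the outer class deep-copies its mutable state but shares the reference to the large object.

```python
import copy

class Class1:
    """Holds a large amount of data that is never altered at runtime."""
    def __init__(self, data):
        self.data = data

class Class2:
    def __init__(self, big, items):
        self.big = big      # Class1 instance: safe to share between copies
        self.items = items  # mutable state: must be copied per instance

    def __deepcopy__(self, memo):
        # deep-copy the mutable members, but share the large Class1 object
        return Class2(self.big, copy.deepcopy(self.items, memo))

big = Class1(list(range(100_000)))
a = Class2(big, [1, 2, 3])
b = copy.deepcopy(a)  # b.big is a.big, but b.items is an independent list
```

The copy is correct for mutable state while the large, read-only object is shared, which is exactly the memory saving the question asks for.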

            Source https://stackoverflow.com/questions/56478210

            QUESTION

            Lifetime issue with generic trait and slice::sort_by
            Asked 2018-Oct-31 at 16:08

            As a learning exercise, I've been writing a sorting library and I'm running into a roadblock. I've defined a trait ExtractFrom to extract a sortable key from items in a slice (to do the equivalent of what sort_by_key would do). I would like to be able to extract a key that borrows data, but my attempts to implement that have failed.

            Here is a reduced example that demonstrates what I've attempted. LargeData is what is contained within the slice, and I've defined LargeDataKey that contains references to the subset of the data I want to sort by. This is running into lifetime issues between the extract_from implementation and what sort_by expects, but I don't know how to fix it. Any explanation or suggestions on how to best accomplish this would be appreciated.

            ...

            ANSWER

            Answered 2018-Oct-31 at 16:07

            The compiler error clearly states that there are two lifetimes at play here:

            Source https://stackoverflow.com/questions/53072850

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install LargeData

            You can download it from GitHub.

            Support

For new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the community page, Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/lokeshlal/LargeData.git

          • CLI

            gh repo clone lokeshlal/LargeData

          • sshUrl

            git@github.com:lokeshlal/LargeData.git
