dcache | retrieving huge amounts of data | Storage library

 by dCache | Java | Version: 9.0.6 | License: No License

kandi X-RAY | dcache Summary

dcache is a Java library typically used in Storage applications. dcache has a build file available and low support. However, dcache has 958 bugs and 9 vulnerabilities. You can download it from GitHub.

dCache is a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods. Depending on the persistency model, dCache provides methods for exchanging data with backend (tertiary) storage systems, as well as space management, pool attraction, dataset replication, hot-spot determination, and recovery from disk or node failures. Connected to a tertiary storage system, the cache simulates unlimited direct-access storage space. Data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. Besides HEP-specific protocols, data in dCache can be accessed via NFSv4.1 (pNFS) and FTP, as well as through WebDAV.

            kandi-support Support

              dcache has a low active ecosystem.
              It has 248 stars and 127 forks. There are 26 watchers for this library.
              It had no major release in the last 6 months.
              There are 296 open issues and 511 have been closed. On average, issues are closed in 83 days. There are 9 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of dcache is 9.0.6.

            kandi-Quality Quality

              dcache has 958 bugs (62 blocker, 23 critical, 658 major, 215 minor) and 15970 code smells.

            kandi-Security Security

              dcache has no vulnerabilities (CVEs) reported, and its dependent libraries have no vulnerabilities reported.
              However, dcache code analysis shows 9 unresolved vulnerabilities (6 blocker, 1 critical, 2 major, 0 minor).
              There are 356 security hotspots that need review.

            kandi-License License

              dcache does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              dcache releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed dcache and discovered the below as its top functions. This is intended to give you an instant insight into dcache implemented functionality, and help decide if they suit your requirements.
            • Creates SRM client.
            • State engine.
            • Handle the verification.
            • Handles an open request.
            • Handle the write operation.
            • Validate this PNFS state.
            • Modify a user.
            • Gets the URLs of all files in the SRM file.
            • Generate HTML for all pools.
            • Returns true if the user is allowed to access the given user.

            dcache Key Features

            No Key Features are available at this moment for dcache.

            dcache Examples and Code Snippets

            No Code Snippets are available at this moment for dcache.

            Community Discussions

            QUESTION

            Is there a way to profile a MPI program with detailed cache/CPU efficiency information?
            Asked 2021-May-06 at 18:23

            OS: Ubuntu 18.04
            Question: How to profile a multi-process program?

            I usually use the Linux perf tool to profile a program as follows: perf stat -d ./main [args]; this command returns a detailed performance-counter report as follows:

            ...

            ANSWER

            Answered 2021-May-06 at 18:23

            Basic profilers like gperf or gprof don't work well with MPI programs, but there are many profiling tools specifically designed to work with MPI that collect and report data for each MPI rank. Virtually all of them can collect hardware performance counters for cache misses. Here are a few options:

            • HPCToolkit for sampling-based profiling. Works on unmodified binaries.
            • TAU and Score-P provide instrumentation-based profiling. Usually requires recompiling.
            • TiMemory and Caliper let you mark code regions to measure. TiMemory also has scripts for roofline analysis etc.

            Decent HPC centers typically have one or more of them installed. Refer to the manuals to learn how to gather hardware counters.
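
            As a concrete illustration (assumptions of mine, not from the answer: Open MPI, which exports OMPI_COMM_WORLD_RANK to each rank's environment, and perf available on every node), you can also get coarse per-rank counters without an MPI-aware profiler by wrapping each rank in its own perf stat. The program below is just a minimal target to profile:

                /* Build:  mpicc rank_work.c -o rank_work
                   Run each rank under its own perf (hypothetical file names):
                     mpirun -np 4 sh -c \
                       'exec perf stat -d -o perf.rank.$OMPI_COMM_WORLD_RANK ./rank_work' */
                #include <mpi.h>
                #include <stdio.h>

                int main(int argc, char **argv) {
                    MPI_Init(&argc, &argv);
                    int rank;
                    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                    double x = 0.0;
                    for (long i = 1; i <= 50000000L; i++)   /* per-rank busy work */
                        x += 1.0 / (double)i;

                    printf("rank %d: %f\n", rank, x);
                    MPI_Finalize();
                    return 0;
                }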

            Source https://stackoverflow.com/questions/67419770

            QUESTION

            C memcpy crashing at run time
            Asked 2021-Mar-11 at 12:38

            I have this issue: whenever I try to call StorageStore it crashes at run time. I have no idea how to fix it. I have tried googling, but I am kind of inexperienced with pointers. Thanks in advance.

            Edit: I compile with gcc -Ofast

            ...

            ANSWER

            Answered 2021-Mar-11 at 12:38

            After googling what an uninitialised pointer is, I realized my issue.

            Thank you alk, Paul Hankin and Jiri Volejnik for your answers.

            I added these lines to fix it:
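
            The actual lines are not reproduced above; as a hedged illustration of the class of bug and fix being described (names and sizes here are hypothetical, not from the question):

                #include <stdlib.h>
                #include <string.h>

                #define STORAGE_SIZE 1024

                int main(void) {
                    char src[STORAGE_SIZE] = "payload";
                    char *dst;                  /* uninitialised: memcpy(dst, ...) here is
                                                   undefined behaviour and typically crashes */
                    dst = malloc(STORAGE_SIZE); /* the missing step: give dst real storage */
                    if (dst == NULL)
                        return 1;
                    memcpy(dst, src, STORAGE_SIZE);
                    free(dst);
                    return 0;
                }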

            Source https://stackoverflow.com/questions/66581890

            QUESTION

            PMC to count if software prefetch hit L1 cache
            Asked 2021-Jan-18 at 07:39

            I am trying to find a PMC (Performance Monitoring Counter) that will display the number of times that a prefetcht0 instruction hits L1 dcache (or misses).

            icelake-client: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz

            I am trying to make this fine-grained, i.e. (note: this should include lfence around prefetcht0)

            ...

            ANSWER

            Answered 2021-Jan-18 at 03:59

            The rdpmc instruction is not ordered with respect to events that occur before or after it in program order. A fully serializing instruction, such as cpuid, is required to obtain the desired ordering guarantees with respect to prefetcht0. The code should be as follows:
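
            (The answer's own code is not reproduced here; the following is a minimal sketch of the serialization pattern it describes, assuming GNU C inline assembly on x86-64 and that the counter of interest has already been programmed as PMC 0 with user-mode rdpmc access enabled.)

                #include <stdint.h>

                /* cpuid is fully serializing: nothing later starts before it completes. */
                static inline void serialize(void) {
                    uint32_t a = 0, b, c, d;
                    __asm__ volatile("cpuid"
                                     : "+a"(a), "=b"(b), "=c"(c), "=d"(d) :: "memory");
                }

                /* Read programmable counter `idx` with rdpmc. */
                static inline uint64_t read_pmc(uint32_t idx) {
                    uint32_t lo, hi;
                    __asm__ volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(idx));
                    return ((uint64_t)hi << 32) | lo;
                }

                uint64_t measure_prefetch(const void *p) {
                    serialize();
                    uint64_t before = read_pmc(0);
                    serialize();
                    __asm__ volatile("prefetcht0 (%0)" :: "r"(p) : "memory");
                    __asm__ volatile("lfence" ::: "memory"); /* fence around prefetcht0 */
                    serialize();
                    uint64_t after = read_pmc(0);
                    serialize();
                    return after - before;
                }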

            Source https://stackoverflow.com/questions/65745752

            QUESTION

            Cache miss latency in clock cycles
            Asked 2021-Jan-13 at 07:46

            To measure the impact of cache misses in a program, I want to compare the latency caused by cache misses to the cycles used for actual computation. I use perf stat to measure the cycles, L1-loads, L1-misses, LLC-loads and LLC-misses in my program. Here is an example output:

            ...

            ANSWER

            Answered 2021-Jan-13 at 07:46

            Out-of-order exec and memory-level parallelism exist to hide some of that latency by overlapping useful work with the time data is in flight. If you simply multiplied L3 miss count by, say, 300 cycles each, that could exceed the total number of cycles your whole program took. The perf event cycle_activity.stalls_l3_miss (which exists on my Skylake CPU) should count cycles when no uops execute and there's an outstanding L3 cache miss, i.e. cycles when execution is fully stalled. But there will also be cycles with some work, but less than without a cache miss, and that's harder to evaluate.

            TL:DR: memory access is heavily pipelined; the whole core doesn't stop on one cache miss, and that's the whole point. A pointer-chasing benchmark (to measure latency) is merely a worst case, where the only work is dereferencing a load result. See Modern Microprocessors: A 90-Minute Guide!, which has a section about memory and the "memory wall". See also https://agner.org/optimize/ and https://www.realworldtech.com/haswell-cpu/ to learn more about the details of out-of-order exec CPUs and how they can continue doing independent work while one instruction is waiting for data from a cache miss, up to the limit of their out-of-order window size. (https://blog.stuffedcow.net/2013/05/measuring-rob-capacity/)

            Re: numbers from vendors:

            L3 and RAM latencies aren't a fixed number of core clock cycles: first, core frequency is variable (and independent of uncore and memory clocks), and second because of contention from other cores and the number of hops over the interconnect. (Related: Is cycle count itself reliable on program timing? discusses some effects of core frequency independent of L3 and memory.)

            That said, Intel's optimization manual does include a table with exact latencies for L1 and L2, and typical latencies for L3 and DRAM, on Skylake-server (2.2.1.3 Skylake Server Microarchitecture Cache Recommendations, https://software.intel.com/content/www/us/en/develop/articles/intel-sdm.html#optimization): they say SKX L3 latency is typically 50-70 cycles. DRAM latency depends somewhat on the timing of your DIMMs.

            Other people have tested specific CPUs, like https://www.7-cpu.com/cpu/Skylake.html.
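
            For reference, the pointer-chasing latency benchmark mentioned above can be sketched as follows (an illustration of mine, with a hypothetical working-set size; each load depends on the previous one, so out-of-order execution cannot overlap the misses):

                #include <stdio.h>
                #include <stdlib.h>
                #include <time.h>

                #define N (1u << 24)   /* 16M pointers (~128 MiB): far larger than L3 */

                int main(void) {
                    size_t *next = malloc((size_t)N * sizeof *next);
                    if (!next) return 1;

                    for (size_t i = 0; i < N; i++) next[i] = i;
                    srand(1);
                    for (size_t i = N - 1; i > 0; i--) {  /* Sattolo's algorithm: one  */
                        size_t j = (size_t)rand() % i;    /* big cycle, so the chase   */
                        size_t t = next[i];               /* visits every slot exactly */
                        next[i] = next[j];                /* once per pass             */
                        next[j] = t;
                    }

                    struct timespec t0, t1;
                    size_t idx = 0;
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    for (size_t i = 0; i < N; i++)
                        idx = next[idx];                  /* serialized dependent loads */
                    clock_gettime(CLOCK_MONOTONIC, &t1);

                    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                              + (t1.tv_nsec - t0.tv_nsec);
                    printf("%.1f ns per dependent load (idx=%zu)\n", ns / N, idx);
                    free(next);
                    return 0;
                }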

            Source https://stackoverflow.com/questions/65691407

            QUESTION

            DMA writing to allocated memory misses the first two addresses on the first write
            Asked 2020-Dec-11 at 07:58

            I'm playing around with a ZYNQ7 (dual-core ARM) with an FPGA. The FPGA design has a 32-bit counter accessing the DDR via a DMA controller in chunks of 256 packets.

            In the C code for processor 1, I run an lwIP application to connect via Ethernet to my PC. There I allocate RAM for the DMA transactions. The address of the pointer is passed via shared memory to the second core.

            ...

            ANSWER

            Answered 2020-Dec-11 at 07:58

            I found a solution: I had to flush the cache after allocating the memory, before passing the address to the second core for processing.
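
            A sketch of that fix (assuming the Xilinx standalone BSP, whose xil_cache.h provides Xil_DCacheFlushRange; the buffer name and size here are hypothetical):

                #include <stdint.h>
                #include <stdlib.h>
                #include "xil_cache.h"   /* Xilinx standalone BSP cache-maintenance API */

                #define DMA_WORDS 256

                static uint32_t *dma_buf;

                void setup_dma_buffer(void) {
                    dma_buf = malloc(DMA_WORDS * sizeof *dma_buf);

                    /* Flush the cached lines covering the buffer out to DDR so the
                       DMA engine and the second core see consistent memory. */
                    Xil_DCacheFlushRange((INTPTR)dma_buf, DMA_WORDS * sizeof *dma_buf);

                    /* ...now publish dma_buf's address to core 2 via shared memory... */
                }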

            Source https://stackoverflow.com/questions/65247527

            QUESTION

            Using the Linux perf tool to measure the number of times the CPU has to access main memory
            Asked 2020-Nov-20 at 20:26

            As I understand, the perf tool can read hardware counters available on a processor to provide performance information. For example, I know to use L1-dcache-load-misses to measure the number of times the L1 cache does not have the requested data.

            I want to find out how many times my CPU, when running my program, has to access the DRAM. Using perf list | grep dram throws up hundreds of counters, about which I cannot find any information.

            So, which event to use to measure the number of times DRAM has been accessed?

            ...

            ANSWER

            Answered 2020-Nov-20 at 18:32

            (This doesn't fully answer your question; hopefully someone else with more memory-profiling experience will answer. The events I mention are present on Skylake-client; I don't know about other CPUs.)

            On a CPU without L4 eDRAM cache, you can count L3 misses. e.g. mem_load_retired.l3_miss for loads. (But that might count 2 loads to the same line as two separate misses, even though they both wait for the same LFB to fill, so actually just one access seen by the DRAM.)

            And it won't count DRAM access driven by HW prefetch. Also, that's only counting loads, not write-backs of dirty data after stores.

            The offcore_response events are super complex because they consider the possibility of multi-socket systems and snooping other sockets, local vs. remote RAM, and so on. Not sure if there's one single event with dram in its name that does what you want. Also, the offcore_response events divide up between demand_code_rd, demand_data_rd, demand_rfo (store misses), and other.

            There is offcore_requests.l3_miss_demand_data_rd to count demand-load requests (not prefetches) that miss in L3.
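
            If the model-specific event names are the obstacle, one generic fallback (a sketch of mine, not from the answer; it counts last-level-cache read misses via perf_event_open(2), which only approximates true DRAM traffic for the reasons given above) is to count around just the region of interest:

                #include <linux/perf_event.h>
                #include <sys/ioctl.h>
                #include <sys/syscall.h>
                #include <unistd.h>
                #include <string.h>
                #include <stdint.h>
                #include <stdio.h>

                static long perf_open(struct perf_event_attr *a) {
                    /* this process, any CPU, no group, no flags */
                    return syscall(SYS_perf_event_open, a, 0, -1, -1, 0);
                }

                int main(void) {
                    struct perf_event_attr attr;
                    memset(&attr, 0, sizeof attr);
                    attr.size = sizeof attr;
                    attr.type = PERF_TYPE_HW_CACHE;
                    attr.config = PERF_COUNT_HW_CACHE_LL
                                | (PERF_COUNT_HW_CACHE_OP_READ << 8)
                                | (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
                    attr.disabled = 1;
                    attr.exclude_kernel = 1;

                    int fd = perf_open(&attr);
                    if (fd < 0) { perror("perf_event_open"); return 1; }

                    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
                    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
                    /* ... region of interest ... */
                    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

                    uint64_t misses;
                    read(fd, &misses, sizeof misses);
                    printf("LLC read misses: %llu\n", (unsigned long long)misses);
                    close(fd);
                    return 0;
                }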

            Source https://stackoverflow.com/questions/64923795

            QUESTION

            Could someone explain the significant performance difference in char-by-char iteration over j.l.String?
            Asked 2020-Aug-13 at 09:48

            I've tried two ways to iterate char-by-char over java.lang.String and found them confusing. The benchmark summarizes it:

            ...

            ANSWER

            Answered 2020-Aug-13 at 09:48

            As @apangin mentioned in his comment

            The problem is that Blackhole.consume is called inside the loop. Being a non-inlined black-box method, it prevents the JIT from optimizing the code around the call, in particular from caching String fields.
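
            The same effect can be illustrated outside the JVM (an analogy of mine, not the JMH code): an opaque, non-inlinable call inside the loop forces the compiler to re-load fields it could otherwise keep cached in registers.

                struct string { const char *value; int count; };

                /* Defined in another translation unit, so it cannot be inlined. */
                extern void consume(char c);

                int sum_chars(const struct string *s) {
                    int total = 0;
                    for (int i = 0; i < s->count; i++) {
                        consume(s->value[i]); /* black box: may reach *s through some
                                                 global alias, so s->value and s->count
                                                 must be re-loaded each iteration */
                        total += s->value[i];
                    }
                    return total;
                }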

            Source https://stackoverflow.com/questions/63338861

            QUESTION

            Perf shows L1-dcache-load-misses in a block with no memory access
            Asked 2020-Aug-06 at 17:03

            Below is a block of code that perf record flags as responsible for 10% of all L1-dcache misses, but the block is entirely movement between zmm registers. This is the perf command string:

            ...

            ANSWER

            Answered 2020-Aug-06 at 17:03

            The event L1-dcache-load-misses is mapped to L1D.REPLACEMENT on Sandy Bridge and later microarchitectures (or mapped to a similar event on older microarchitectures). This event doesn't support precise sampling, which means that a sample can point to an instruction that couldn't have generated the sampled event. (Note that L1-dcache-load-misses is not supported on any current Atom.)

            Starting with Linux 3.11 running on a Haswell+ or Silvermont+ microarchitecture, samples can be captured with eventing instruction pointers by specifying a sampling event that meets the following two conditions:

            • The event supports precise sampling. You can use, for example, any of the events that represent memory uop or instruction retirement. The exact names and meanings of the events depend on the microarchitecture. Refer to the Intel SDM Volume 3 for more information. There is no event that supports precise sampling and has the same exact meaning as L1D.REPLACEMENT. On processors that support Extended PEBS, only a subset of PEBS events support precise sampling.
            • The precise sampling level is enabled on the event. In Linux perf, this can be done by appending ":pp" to the event name or raw event encoding, or "pp" after the terminating slash of a raw event specified in the PMU syntax. For example, on Haswell, the event mem_load_uops_retired.l1_miss:pp can be specified to Linux perf.

            With such an event, when the event counter overflows, the PEBS hardware is armed, which means that it's now looking for the earliest possible opportunity to collect a precise sample. When there is at least one instruction that will cause an event during this window of time, the PEBS hardware will eventually be triggered by one of these instructions, with a bias toward high-latency instructions. When the instruction that triggers PEBS retires, the PEBS microcode routine executes and captures a PEBS record, which contains among other things the IP of the instruction that triggered PEBS (which is different from the architectural IP). The instruction pointer (IP) used by perf to display the results is this eventing IP. (I noticed there can be a negligible number of samples pointing to instructions that couldn't have caused the event.)

            On older microarchitectures (before Haswell and Silvermont), the "pp" precise sampling level is also supported. PEBS on these processors will only capture the architectural IP, which points to the static instruction that immediately follows the PEBS-triggering instruction in program order. Linux perf uses the LBR, if possible, which contains source-target IP pairs, to determine if that captured IP is the target of a jump. If that is the case, it adds the source IP as the eventing IP to the sample record.

            Some microarchitectures support one or more events with better sampling distribution (how much better depends on the microarchitecture, the event, the counter, and the instructions being executed at the time in which the counter is about to overflow). In Linux perf, precise distribution can be enabled, if supported, by specifying the precise level "ppp."

            Source https://stackoverflow.com/questions/63251365

            QUESTION

            Am I correctly reasoning about cache performance?
            Asked 2020-Aug-03 at 18:12

            I'm trying to solidify my understanding of data contention and have come up with the following minimal test program. It runs a thread that does some data crunching, and spinlocks on an atomic bool until the thread is done.

            ...

            ANSWER

            Answered 2020-Aug-02 at 22:05

            An instance of type SideProcessor has the following fields:
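
            (The answer's field listing is elided above. As a self-contained miniature of the setup the question describes, a worker thread plus a spin-wait on an atomic bool, here is a C11 sketch of mine; the names and workload are hypothetical.)

                #include <stdatomic.h>
                #include <stdbool.h>
                #include <stdio.h>
                #include <threads.h>

                static atomic_bool done;
                static long long sum;

                static int worker(void *arg) {
                    (void)arg;
                    for (long long i = 0; i < 100000000LL; i++) /* data crunching */
                        sum += i;
                    atomic_store_explicit(&done, true, memory_order_release);
                    return 0;
                }

                int main(void) {
                    thrd_t t;
                    if (thrd_create(&t, worker, NULL) != thrd_success)
                        return 1;

                    while (!atomic_load_explicit(&done, memory_order_acquire))
                        ;   /* spinning repeatedly pulls the flag's cache line */

                    thrd_join(t, NULL);
                    printf("sum = %lld\n", sum);
                    return 0;
                }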

            Source https://stackoverflow.com/questions/63196854

            QUESTION

            Why won't perf report "dcache-store-misses"?
            Asked 2020-Jul-14 at 16:42

            I am using perf to collect some metrics about my code, and I am running the following command:

            ...

            ANSWER

            Answered 2020-Jul-14 at 16:42

            Perf prints "<not supported>" for generic events which were requested by the user or by the default event set (in perf stat) but which are not mapped to real hardware PMU events on the current hardware. Your hardware has no exact match for the L1-dcache-store-misses generic event, so perf informs you that your request sudo perf stat -e L1-dcache-load-misses,L1-dcache-store-misses ./progB can't be fully implemented on the current machine.

            Your CPU is "Product formerly Kaby Lake", which has a Skylake PMU according to the Linux kernel file arch/x86/events/intel/core.c:

            Source https://stackoverflow.com/questions/62821668

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities (CVEs) reported

            Install dcache

            The file [BUILDING.md](BUILDING.md) describes how to compile dCache code and build various packages. The file also describes how to create the system-test deployment, which provides a quick and easy way to get a working dCache. Running system-test requires no special privileges and all the generated files reside within the code-base. There are also packages of stable releases at https://www.dcache.org/downloads/1.9/.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.

            CLONE
          • HTTPS

            https://github.com/dCache/dcache.git

          • CLI

            gh repo clone dCache/dcache

          • SSH

            git@github.com:dCache/dcache.git


            Consider Popular Storage Libraries

            • localForage by localForage
            • seaweedfs by chrislusf
            • Cloudreve by cloudreve
            • store.js by marcuswestin
            • go-ipfs by ipfs

            Try Top Libraries by dCache

            • nfs4j by dCache (Java)
            • oncrpc4j by dCache (Java)
            • hadoop-nfs by dCache (Java)
            • xrootd4j by dCache (Java)
            • dcap by dCache (C)