caching | framework library for wrapping .NET cache access
kandi X-RAY | caching Summary
A C# framework library for wrapping .NET cache access, including the MemoryCache, AppFabric Cache, Memcached and a disk cache.
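To illustrate the wrapping pattern such a library provides, here is a hypothetical sketch (NOT this library's actual API): a single cache abstraction over System.Runtime.Caching.MemoryCache, behind which other backing stores could sit.

```csharp
using System;
using System.Runtime.Caching;

// Hypothetical sketch of the wrapping pattern - not this library's actual API.
// One interface hides which backing store (MemoryCache, AppFabric, Memcached,
// a disk cache) actually holds the data.
public interface ICacheProvider
{
    T GetOrAdd<T>(string key, Func<T> factory, TimeSpan timeToLive);
}

public sealed class MemoryCacheProvider : ICacheProvider
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public T GetOrAdd<T>(string key, Func<T> factory, TimeSpan timeToLive)
    {
        // MemoryCache.Get returns null when the key is absent.
        var existing = _cache.Get(key);
        if (existing != null)
            return (T)existing;

        var value = factory();
        _cache.Set(key, value, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.Add(timeToLive)
        });
        return value;
    }
}
```

Swapping MemoryCache for AppFabric, Memcached, or a disk store would then only mean supplying another ICacheProvider implementation.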
Community Discussions
Trending Discussions on caching
QUESTION
I have been trying out an open-source personal AI assistant script. The script works fine, but I want to create an executable so that I can gift it to one of my friends. However, when I try to create the executable using auto-py-to-exe, it fails with the error below:
...
ANSWER
Answered 2021-Nov-05 at 02:20
42681 INFO: PyInstaller: 4.6
42690 INFO: Python: 3.10.0
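The version lines in the log hint at the likely cause: PyInstaller 4.6 predates Python 3.10 support. A hedged sketch of the usual remedy, assuming the version mismatch is indeed the culprit, is to either build under Python 3.9 or move to a PyInstaller release that supports 3.10:

```sh
# Assumption: the failure is the PyInstaller 4.6 / Python 3.10 mismatch.
# Either run the build under Python 3.9, or upgrade the tooling to a
# release that supports 3.10:
pip install --upgrade pyinstaller
pip install --upgrade auto-py-to-exe
```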
QUESTION
We want to display customer (actually customer-group) specific information on product detail pages in Shopware 6.
Shopware has an HTTP cache, and we are afraid that the page would be cached while a specific customer group is viewing it, leaking the information to non-customers.
Is this assumption correct?
The documentation does not reveal much information about this.
Is there a way to set specific cache tags, so that the information is only displayed to the correct customer group?
Or do we need to fetch the data dynamically via AJAX?
Bonus question: Can the HTTP cache be simulated in automatic tests to ensure the functionality works?
What I found out so far:
- There is an annotation, @httpCache, for controllers, which seems to control whether a page is cached or not.
- The cache key is generated in \Shopware\Storefront\Framework\Cache\HttpCacheKeyGenerator::generate. It takes the full request URI into account, plus some injected cacheHash. I believe it would not take the customer group into account.
- Maybe this generate() method could be decorated, but I am not sure if that is the right way.
- There is a cookie being set, sw-cache-hash, which influences the caching. It takes the customer into account. sw-cache-hash is created here:
...
ANSWER
Answered 2022-Jan-28 at 10:51
As you can see in the last code snippet, it takes the active rule IDs into account. This means that if you create a rule (through Settings > Rule Builder) that is active for one customer group but not for others, it will be taken into account and produce a different cache hash for the different customer groups.
QUESTION
After migrating from Remark to MDX, my builds on Netlify are failing.
I get this error when trying to build:
...
ANSWER
Answered 2022-Jan-08 at 07:21
The problem is that you have Node 17.2.0 locally, but Netlify's environment runs a lower version (by default it is not set to 17.2.0). So the local environment is OK, while the Netlify environment breaks because of this mismatch of Node versions.
When Netlify deploys your site it installs and builds your site again, so you should ensure that both environments work under the same conditions. Otherwise the two node_modules will differ, so your application will behave differently or eventually won't even build because of dependency errors.
You can easily play with the Node version in multiple ways, but I'd recommend using the .nvmrc file. Just run the following command in the root of your project:
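The command itself is cut off in this excerpt; presumably it writes the local Node version into the file, along these lines:

```sh
# Assumption: pin the Node version Netlify should use to the local one.
node -v > .nvmrc
```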
QUESTION
Hi, I was deploying a branch on Heroku and it threw up this error. I also tried deploying a branch which had worked perfectly before, but it is also showing the same error.
Local yarn version: 1.22.17; local node version: v12.22.7. Please help!
I tried building without yarn.lock and package-lock.json; same thing.
This is how the Heroku deployment build log starts through the CLI:
...
ANSWER
Answered 2021-Dec-18 at 14:32
I had a similar problem but resolved it by following these steps:
- Run the following command:
heroku buildpacks:add heroku/nodejs --index 1
- Update the Node version from 16.x to 12.16.2 in package.json (see the sketch below).
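For reference, the Node version pin typically lives in the engines field of package.json; a minimal sketch, with the version number taken from the answer above:

```json
{
  "engines": {
    "node": "12.16.2"
  }
}
```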
QUESTION
For the sake of example let's define a toy automaton type:
...
ANSWER
Answered 2021-Dec-24 at 00:37
Laziness to the rescue. We can recursively define a list of all the sub-automata, such that their transitions index into that same list:
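The answer's code is not included in this excerpt, and the question's automaton type is also elided, so the following is only a hypothetical sketch of the knot-tying idea with an invented toy type: every sub-automaton lives in one lazy list, and transitions index back into that same list.

```haskell
-- Hypothetical toy type (the question's actual definition is elided).
data Auto = Auto
  { accepts :: Bool          -- is this state accepting?
  , step    :: Char -> Auto  -- transition on an input symbol
  }

-- Tying the knot: the list is defined in terms of itself, so every
-- transition shares the same sub-automata instead of rebuilding them.
autos :: [Auto]
autos =
  [ Auto False (\c -> autos !! (if c == 'a' then 1 else 0))  -- state 0
  , Auto True  (\_ -> autos !! 0)                            -- state 1
  ]

-- Run the automaton from state 0 over an input string.
runAuto :: String -> Bool
runAuto = accepts . foldl (flip step) (head autos)
```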
QUESTION
I'm trying to deploy my Nuxt.js project to Netlify. The installation part works fine, but it returns an error in the build process.
I tried searching Google but couldn't find any solution to this problem.
I also tried the command CI= npm run generate.
ANSWER
Answered 2021-Nov-11 at 15:42
I just had the same problem and solved it thanks to this question. The problem seems to be fibers.
The steps I took to fix it:
- Uninstall fibers: npm uninstall fibers
- Delete package-lock.json and node_modules/
- Install the packages again: npm install
Simply removing fibers from package.json isn't enough, as Netlify seems to still find the package in package-lock.json.
QUESTION
As of this morning, CircleCI is failing for me with this strange build error:
...
ANSWER
Answered 2021-Nov-08 at 09:06
Try using a next-generation Ruby image. In your case, change circleci/ruby:2.7.4-node-browsers to cimg/ruby:2.7.4-browsers. You can find the full list of images here.
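In config terms that's a one-line change in .circleci/config.yml; a minimal sketch assuming a standard Docker executor (the job layout is illustrative):

```yaml
# .circleci/config.yml (fragment) - job layout is illustrative.
jobs:
  build:
    docker:
      - image: cimg/ruby:2.7.4-browsers  # was circleci/ruby:2.7.4-node-browsers
```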
QUESTION
I have a React app in which I implemented a PWA. I want to change the caching strategy to network-first, but I have no idea how to do so. I have read many articles about it, but none of them actually tells you how to do it. This is my code below, and I appreciate any help with it:
index.js:
ANSWER
Answered 2021-Oct-30 at 21:41
The solution to my problem was described in this article about all the PWA strategies: https://jakearchibald.com/2014/offline-cookbook/#network-falling-back-to-cache
What I had to do was add a piece of code to the end of my service-worker.js file:
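The snippet itself is cut off in this excerpt. The cited cookbook describes the network-falling-back-to-cache pattern; a minimal sketch of that pattern (not necessarily the asker's exact code) looks like this:

```javascript
// Network first, falling back to cache - the pattern from the cited
// cookbook article. Appended to service-worker.js; cache contents are
// assumed to have been populated elsewhere (e.g. at install time).
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(() => caches.match(event.request))
  );
});
```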
QUESTION
First of all, I've tried all the recommendations from C# DNS-related SO threads and other internet articles - messing with ServicePointManager/ServicePoint settings, setting automatic request connection close via HTTP headers, changing connection lease times - nothing helped. It seems all those settings are intended for fixing DNS issues in long-running processes (like web services). It even makes sense for a process to have its own DNS cache to minimize DNS queries or OS DNS cache reads. But that's not my case.
The problem
Our production infrastructure uses HA (high availability) DNS for swapping server nodes during maintenance or functional problems. It's built in such a way that in some places we have multiple CNAME records which in fact point to the same HA A record, like this:
- eu.site1.myprodserver.com (CNAME) > eu.ha.myprodserver.com (A)
- eu.site2.myprodserver.com (CNAME) > eu.ha.myprodserver.com (A)
The TTL of all these records is 60 seconds. So when the European node is in trouble or maintenance, the A-record switches to the IP address of some other node.
Then we have a monitoring utility which is executed once every 5 minutes and uses both site1 and site2. For it to work properly, both names must point to the same DC, because data sync between DCs doesn't happen that fast. Since both CNAMEs are in fact linked to the same A-record with a short TTL, at first glance it seems like nothing can go wrong. But it turns out it can.
The utility is written in C# for .NET Framework 4.7.2 and uses the HttpClient class for performing requests to both sites. Yeah, it's him again.
We have noticed that when a server node switch occurs, the utility often starts acting as if site1 and site2 were in different DCs. The pattern of its behavior in such moments is strictly determined, so it's not like it gets confused somewhere in the middle of the process - it incorrectly resolves one or both of these addresses from the very start.
I've made another much simpler utility which just sends one GET-request to site1 and then started intentionally switching nodes on and off and running this utility to see which DC would serve its request. And the results were very frustrating.
Despite the Windows DNS cache already being updated (checked via ipconfig and the Get-DnsClientCache cmdlet) and despite the records' overall TTL of 60 seconds, HttpClient keeps sending requests to the old IP address, sometimes for another 15-20 minutes. Even when I completely shut down the "outdated" application server, the utility kept trying to connect to it, so even connection failures don't wake it up.
It becomes even more frustrating if you start running ipconfig /flushdns in between utility runs. Sometimes after flushdns the utility realizes that the IP has changed. But as soon as you run another flushdns (or even without it - I haven't figured this out 100%) and run the utility again, it goes back to the old address! Unbelievable!
And to add even more frustration: if you resolve the IP address from within the same utility using the Dns.GetHostEntry method (which uses the cache, as per this comment) right before calling HttpClient, the resolve result is correct... But HttpClient would anyway make a connection to an IP address seemingly of its own independent choice. So HttpClient somehow does not seem to rely on the built-in .NET Framework DNS resolution.
So the questions are:
- Where does a newly created .NET Framework process take those cached DNS results from?
- Even if there is some kind of a mystical global .NET-specific DNS cache, then why does it absolutely ignore TTL?
- How is it possible at all that it reverts to the outdated old IP address after it has already once "understood" that the address has changed?
P.S. I have worked this all around by implementing a custom HttpClientHandler which performs DNS queries on each hostname's first usage thus it's independent from external DNS caches (except for caching at intermediate DNS servers which also affects things to some extent). But that was a little tricky in terms of TLS certificates validation and the final solution does not seem to be production ready - but we use it for monitoring only so for us it's OK. If anyone is interested in this, I'll show the class code which somewhat resembles this answer's example.
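The author's class isn't shown here, but a minimal sketch of the idea could look like the following, assuming .NET Framework 4.7.1+ for the certificate callback; the names and the relaxed TLS validation are illustrative, not production-ready and not the author's actual code:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Security;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch: resolve the host name on every request so that
// no external DNS cache (OS, proxy, resolver) can pin us to a stale IP.
public class FreshDnsHandler : HttpClientHandler
{
    public FreshDnsHandler()
    {
        // The request URI will carry a raw IP, so the certificate's host
        // name won't match the URI. Illustrative only: a real version
        // should compare the certificate subject to the original host.
        ServerCertificateCustomValidationCallback =
            (request, cert, chain, errors) =>
                errors == SslPolicyErrors.None ||
                errors == SslPolicyErrors.RemoteCertificateNameMismatch;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var host = request.RequestUri.Host;

        // Fresh OS-level lookup on every call.
        var addresses = await Dns.GetHostAddressesAsync(host);

        // Rewrite the URI to the resolved IP...
        var builder = new UriBuilder(request.RequestUri)
        {
            Host = addresses[0].ToString()
        };
        request.RequestUri = builder.Uri;

        // ...but keep the logical host name for routing on the server.
        request.Headers.Host = host;

        return await base.SendAsync(request, cancellationToken);
    }
}

// Usage: var client = new HttpClient(new FreshDnsHandler());
```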
Update 2021-10-08
The utility works from behind a corporate proxy. In fact there are multiple proxies for load balancing. So I am now also in the process of verifying this:
- If the DNS resolving is performed by the proxies and they don't respect the TTL or if they cache (keep alive) TCP connections by hostnames - this would explain the whole problem
- If it's possible that different proxies handle HTTP requests on different runs of the utility - this would answer the most frustrating question #3
The answer to "Does .NET Framework have an OS-independent global DNS cache?" is NO. The HttpClient class, and .NET Framework in general, had nothing to do with any of this. I posted my investigation results as the accepted answer.
...
ANSWER
Answered 2021-Oct-14 at 21:32
HttpClient, please forgive me! It was not your fault!
Well, this investigation was huge. And I'll have to split the answer into two parts since there turned out to be two unconnected problems.
1. The proxy server problem
As I said, the utility was being tested from behind a corporate proxy. In case you didn't know (like I didn't until recently), when using a proxy server it's not your machine performing DNS queries - it's the proxy server doing it for you.
I made some measurements to understand how long the utility keeps connecting to the wrong DC after the DNS record switch. And the answer was a fantastic, exact 30 minutes. This experiment also clearly showed that the local Windows DNS cache has nothing to do with it: those 30 minutes started exactly at the point when the proxy server was waking up (finally starting to send HTTP requests to the right DC).
The exact number of 30 minutes helped one of our administrators finally figure out that the proxy servers have a configuration parameter for the minimal DNS TTL, which is set to 1800 seconds by default. So the proxies have their own DNS cache. These are hardware Cisco proxies, and the admin also noted that this parameter is "hidden quite deeply" and is not even mentioned in the user manual.
As soon as the proxies' minimal DNS TTL was changed from 1800 seconds to 1 second (yeah, admins have no mercy), the issue stopped reproducing on my machine.
But what about "forgetting" the just-understood correct IP address and falling back to the old one?
Well, as I also said, there are several proxies. There is a single corporate proxy DNS name, but if you run nslookup for it, it shows multiple IPs behind it. Each time the proxy server's IP address is resolved (for example, when the local cache expires), there's quite a chance you'll jump onto another proxy server.
And that's exactly what ipconfig /flushdns had been doing to me. As soon as I started addressing the proxy servers by their direct IP addresses instead of their common DNS name, I found that different proxies may easily route identical requests to different DCs. That's because some of them have those 30-minute-cached DNS records while others have to perform resolving.
Unfortunately, after the proxy theory was proven, other news came in: the production monitoring servers are placed outside the corporate network and do not use any proxy servers. So here we go...
2. The short TTL and public DNS servers problem
The monitoring servers are configured to use Google's 8.8.8.8 and 8.8.4.4 DNS servers. Resolve responses for our short-lived DNS records from these servers are somewhat weird:
- The returned TTL of CNAME records hovers around the 1-hour mark. It gradually decreases for several minutes and then jumps back to 3600 seconds, and so on.
- The returned TTL of the root A-record is almost always exactly 60 seconds. I occasionally received various numbers less than 60, but there was no obvious, humanly perceivable logic. So it seems these IP addresses in fact point to balancers that distribute requests between multiple similar DNS servers which are not synced with each other (and each has its own cache).
Windows is not stupid, and according to my experiments it doesn't care about the CNAME's TTL - it only cares about the root A-record's TTL, so its client cache, even for CNAME records, is never assigned a TTL higher than 60 seconds.
But due to the inconsistency (or in some sense over-consistency?) of the A-record TTL which Google's servers return (an unpredictable 0-60 seconds), the Windows local cache gets confused. Two facts demonstrated this:
- Multiple calls to Resolve-DnsName for site1 and site2 over several minutes, with random pauses between them, eventually led to Get-ClientDnsCache showing the local cache TTLs of the two site names diverging by up to 15 seconds. This is a big enough difference to sometimes mess things up. And that was just my short experiment, so I'm quite sure it can actually get bigger.
- Executing Invoke-WebRequest against each of the sites one right after another, once every 3-5 seconds while switching the DNS records, twice let me face a situation where the requests went to different DCs.
Conclusion?
The latter experiment had one strange detail I can't explain. Calling Get-DnsClientCache after Invoke-WebRequest shows no records appearing in the local cache for the just-requested site names. But anyway, the problem has clearly been reproduced.
It will take time to see whether my workaround with real-time DNS resolving brings any improvement. Unfortunately, I don't believe it will - the DNS servers used in production (which would eventually be used by the monitoring utility for real-time IP resolving) are public Google DNS, which are not reliable in my case.
And one thing that is worse than an intermittently failing monitoring utility is that real-world users also rely on public DNS servers, and they definitely do face problems during our maintenance work or significant failures.
So have we learned anything out of all this?
- Maybe a short DNS TTL is generally a bad practice?
- Maybe we should install additional routers, assign them static IPs, attach the DNS names to them and then route traffic internally between our DCs to finally stop relying on DNS records changing?
- Or maybe public DNS servers are doing a bad job?
- Or maybe the technological singularity is closer than we think?
I have no idea. But it's quite possible that "yes" is the right answer to all of these questions.
However, there is one thing we have surely learned: network hardware manufacturers should write better documentation.
QUESTION
I'm working on a pyspark routine to interpolate the missing values in a configuration table.
Imagine a table of configuration values that go from 0 to 50,000. The user specifies a few data points in between (say at 0, 50, 100, 500, 2000, 500000) and we interpolate the remainder. My solution follows this blog post quite closely, except I'm not using any UDFs.
In troubleshooting the performance of this (it takes ~3 minutes), I found that one particular window function is taking all of the time, and everything else I'm doing takes mere seconds.
Here is the main area of interest - where I use window functions to fill in the previous and next user-supplied configuration values:
...
ANSWER
Answered 2021-Sep-24 at 02:46
The solution that doesn't answer the question
In trying various things to speed up my routine, it occurred to me to try rewriting my usages of first() to just be usages of last() with a reversed sort order.
So rewriting this:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported