sites | Simple wiki-based site generator | Wiki library
kandi X-RAY | sites Summary
Spin up unlimited git-backed Gollum wiki sites on demand using a simple web interface, then edit them in-browser to make prebaked websites.
Top functions reviewed by kandi - BETA
- Create a new page object.
- Render a page.
- Return a hash of all the names in the cache.
- Compute compression.
- Render the layout.
- Generate a cache key.
- Create a new wiki object.
sites Key Features
sites Examples and Code Snippets
# flask_tracking/tracking/views.py
# Imports added for completeness; the original snippet omits them.
# The `tracking` blueprint, SiteForm and Site come from the app's other modules.
from flask import flash, redirect, url_for
from flask_login import current_user, login_required

@tracking.route("/sites", methods=("GET", "POST"))
@login_required
def view_sites():
    form = SiteForm()
    if form.validate_on_submit():
        Site.create(owner=current_user, **form.data)
        flash("Added site.")  # flash message truncated in the source; wording assumed
        return redirect(url_for("tracking.view_sites"))
Community Discussions
Trending Discussions on sites
QUESTION
This recursive definition of T fails to type-check.
ANSWER
Answered 2022-Feb-10 at 13:27

Type families are unmatchable and must be fully saturated, meaning they cannot be passed partially applied as an argument. This is basically what the error message says:

• The associated type family ‘T’ should have 1 argument, but has been given none

At present this is not distinguished at the kind level (see the Unsaturated Type Families proposal), but there is a very clear distinction within GHC. According to :kind, both T and Identity have the same kind, while in reality Identity is Matchable and T is Unmatchable:
QUESTION
Ever since I upgraded my Mac to Monterey, I've been having issues with Vagrant.
Initially, I used to see a VBoxManage error in the terminal when running vagrant up. I posted a question on this on SO previously; see here for details.
Today, I uninstalled VirtualBox again (removed the VirtualBox VMs folder and moved the application to the trash) and reinstalled VirtualBox 6.1.3 for OS X hosts (link here).
I then ran vagrant up in the terminal and it completed successfully:
After seeing the famous green teddy, I tried going to vvv.test, but the page doesn't load. I've tried accessing URLs of sites that had been provisioned before, but they too do not load.
I've also run vagrant up --debug, and nothing concerning was seen.
My Vagrant version is 2.2.19.
Unsure what steps to take next?
Edit:
Steps taken:
- Ran vagrant up --provision to provision the sites in the config.yml file (the config.yml file can be seen below)
- Tried to access website-dev.test; the page doesn't load
- Tried to access vvv.test; the page doesn't load
- Ran vagrant reload --provision and repeated steps 2 and 3, with the same results
- Ran vagrant halt and vagrant up and repeated steps 2 and 3, with the same results
I don't believe there's an issue in my config.yml file, as everything was working fine before the Monterey update (and I've made no changes to my yml file since). But, to cover all scenarios, here is my config.yml file:
ANSWER
Answered 2021-Dec-15 at 18:33

Thanks to guidance from @Tinxuanna, I've managed to solve the issue (finally!).
For anyone else having similar issues, here's what I did (a scripted version of the same cleanup is sketched after the list):
- Go to the /etc folder and find the file called hosts.
- Open it in a text editor.
- Remove the IP addresses related to vagrant (I kept a backup of the original file just in case).
- After saving the hosts file with those IP addresses removed, run vagrant up --provision.
- Then run vagrant up.
- Then access vvv.test.
- You're done!
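For reference, the manual cleanup above can also be scripted. This is a minimal Python sketch, not part of the original answer: it assumes the Vagrant-managed entries in /etc/hosts carry a "# VAGRANT" marker comment (as the vagrant-hostsupdater plugin adds by default). Adjust the filter if your entries look different, and run it with sudo since /etc/hosts is root-owned.

import shutil
from pathlib import Path

HOSTS = Path("/etc/hosts")

# Keep a backup of the original file, as the answer recommends.
shutil.copy(HOSTS, HOSTS.with_name("hosts.bak"))

# Drop the Vagrant-managed lines; assumption: they carry a "# VAGRANT" marker.
lines = HOSTS.read_text().splitlines(keepends=True)
kept = [line for line in lines if "# VAGRANT" not in line]
HOSTS.write_text("".join(kept))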
QUESTION
I have two data frames with the same column names but different numbers of rows. The first data frame (a) looks similar to this:
...
ANSWER
Answered 2022-Jan-04 at 03:31

I suggest you first replace every NA value in b with 0 and then use inner_join to merge the result with the corresponding Site values in a. You can then replace the non-NA values of a with their corresponding values in b, leaving the NA values of a intact. In the end, we bind the modified data frame with the subset of a whose Site values are not present in b.
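The accepted R code itself is not shown above, but the described steps translate directly. Here is a rough pandas sketch of the same recipe; the toy frames a and b are made up, since the question's data is elided:

import numpy as np
import pandas as pd

# Toy stand-ins for the question's frames: same columns, different row counts.
a = pd.DataFrame({"Site": [1, 2, 3], "val": [10.0, np.nan, 30.0]})
b = pd.DataFrame({"Site": [1, 2], "val": [np.nan, 5.0]})

b0 = b.fillna(0)  # step 1: replace every NA in b with 0
m = a.merge(b0, on="Site", how="inner", suffixes=("_a", "_b"))  # step 2: join on Site
# step 3: where a has a value, take b's value; where a is NA, keep NA
m["val"] = np.where(m["val_a"].notna(), m["val_b"], np.nan)
m = m[["Site", "val"]]
# step 4: append the rows of a whose Site never appears in b
rest = a[~a["Site"].isin(b["Site"])]
result = pd.concat([m, rest], ignore_index=True)
print(result)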
QUESTION
I just updated my M1 Mac to Big Sur 11.5.2 and something in VSCode seems to have broken. I am unable to use the latest Homebrew PHP, which is installed.
In VSCode it's pointing to /usr/bin/php, which is the Mac's built-in PHP; that's not the one I'm using with Homebrew. I tried everything and changed the path, but still the same thing.
I checked the one similar question to mine and all it suggests is to use Homebrew, which I already am doing, so I'm not sure what I am doing wrong here.
I am running PHPUnit tests in the VSCode terminal and I am getting the following error:
...
ANSWER
Answered 2021-Aug-25 at 09:40

I got the same problem. Open your terminal and write this:
QUESTION
Is it possible to interleave template parameters inside of function call sites?
I effectively want to implement the following, but I don't know how (pseudocode):
...
ANSWER
Answered 2021-Dec-24 at 01:58

Like this:
QUESTION
Even after reading through multiple articles explaining the differences between static and SSR rendering, I still don't understand how dynamic API calls work in these different modes.
I know that Nuxt has the fetch and asyncData hooks, which are only called once during static generation, but what if I use dynamic HTTP requests inside component methods (e.g. when submitting a form via a POST request)? Does that even work in static sites?
I'm making a site that shows user-generated content on most pages, so I have to make GET requests every time one of those pages is visited to keep the content up to date. Can I do that with a static site, or do I have to use SSR or something else? I don't want to use client-side rendering (SPA mode) because it's slow and bad for SEO. So what is my best option?
...
ANSWER
Answered 2021-Nov-21 at 12:11

"I use dynamic HTTP requests inside component methods (e.g. when submitting a form via a POST request)? Does that even work in static sites?"
The short answer to this question is: yes, it does work. In fact, you can have HTTP requests in any lifecycle hooks or methods in your code, and they all work fine with static mode too.
Static site generation and SSR mode in Nuxt.js are tools to help you with SEO issues, and I will explain the difference with an example.
Imagine you have a blog page at a URL like coolsite.com/blogs, with some posts that come from a database.
SPA
In this mode, when a user visits the said URL, the server basically responds with a .js file; then, in the client, this .js file is rendered. A Vue instance gets created, and when the app reaches the code for the get-posts request (for example, in the created hook), it makes an API call, gets the result, and renders the posts to the DOM.
This is not cool for SEO, since at the first app load there isn't any content, and all search engine web crawlers are better at understanding content as HTML rather than JS.
SSR
In this mode, if you use the asyncData hook, then when the user requests the said URL, the server runs the code in the asyncData hook, in which you should have your API call for the blog posts. It gets the result, renders it as an HTML page, and sends that back to the user with the content already inside it (the Vue instance still gets created in the client). There is no need for any further request from client to server. Of course, you can still have API calls in other methods or hooks.
The drawback here is that you need a particular deployment setup for this to work, since the code must run on the server. For example, you need Node.js web hosting to run your app on the server.
STATIC
This mode is actually a compromise between the last two. It means you can have static web hosting but still make your app better for SEO.
The way it works is simple. You use asyncData again, but here, when you are generating your app on your local machine, it runs the code inside asyncData, gets the posts, and then renders the proper HTML for each of your app's routes. So when you deploy and a user requests that URL, they will get a rendered page just like the one in SSR mode.
But the drawback here is that if you add a post to your database, you need to regenerate your app on your local machine and update the required file(s) on your server with the newly generated files in order for users to get the latest content.
Apart from this, any other API call will work just fine, since the code required for it is already shipped to the client.
Side note: I used asyncData in my example since this is the hook you should use at the page level, but fetch is also a Nuxt.js hook that works more or less the same at the component level.
QUESTION
I have two containers, App and Webserver. Webserver is a plain nginx:alpine image, and App is an Express.js app running on port 3030 under ubuntu:focal. I heard that it is common practice to use separate containers for the application and the server, so I added proxy_pass http://app:3030/; to the nginx config. Something went wrong, and I dug into it a bit. To rule out an incorrect nginx setup, I tried raw curl requests from the webserver container to the app container, with no luck. Here is my docker-compose:
ANSWER
Answered 2021-Oct-29 at 16:24

You need to at least expose port 3030 on the app container to make it available to other containers:
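The answer's compose snippet is elided above. For orientation, an expose stanza in docker-compose generally looks like the sketch below; the service names mirror the question, everything else is assumed:

services:
  app:
    build: .        # the Express.js app image (assumed build context)
    expose:
      - "3030"      # make port 3030 reachable by other containers on the same network
  webserver:
    image: nginx:alpine
    depends_on:
      - app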
QUESTION
So I am having a twig/timber issue where I'm having a difficult time getting a placeholder image set up; here is what I have so far:
ANSWER
Answered 2021-Oct-23 at 15:52

This is an issue with the way twig/timber defines paths! It's a super weird way of doing things which adds unnecessary complexity to a project.
That being said, according to their documentation, you can add a filter called relative to the end of your defaultimg variable, which converts an absolute URL into a relative one and eventually outputs the correct path to your default image.
So your code would be something like this:
- Defining the path to your default image:
QUESTION
I'm trying to run two Django sites on the same server under different IPs. An error occurred saying the port was busy; I fixed the ports, but the site does not start. Please tell me where the error is. The IPs work, but when I go to the second IP I get redirected to the first site. All settings were specified for the second site. At the end, I added the nginx config of the first site.
This is the second docker-compose file and its settings. I would be very grateful for your help.
.env
...
ANSWER
Answered 2021-Sep-22 at 21:54

If you're running two virtual servers with different IPs on the same machine, you'd want to specify the IP address in the listen directive:
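As a sketch of what that looks like (the actual config is elided above; the addresses below are documentation placeholders, not values from the question):

server {
    listen 203.0.113.10:80;    # first site binds to its own IP
    server_name first.example;
}

server {
    listen 203.0.113.11:80;    # second site: different IP, same port
    server_name second.example;
}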
QUESTION
First of all, I've tried all the recommendations from C# DNS-related SO threads and other internet articles - messing with ServicePointManager/ServicePoint settings, setting automatic request connection close via HTTP headers, changing connection lease times - nothing helped. It seems like all those settings are intended for fixing DNS issues in long-running processes (like web services). It would even make sense for a process to have its own DNS cache to minimize DNS queries or OS DNS cache reads. But that's not my case.
The problem
Our production infrastructure uses HA (high-availability) DNS for swapping server nodes during maintenance or functional problems. And it's built in such a way that in some places we have multiple CNAME records which in fact point to the same HA A record, like this:
- eu.site1.myprodserver.com (CNAME) > eu.ha.myprodserver.com (A)
- eu.site2.myprodserver.com (CNAME) > eu.ha.myprodserver.com (A)
The TTL of all these records is 60 seconds. So when the European node is in trouble or under maintenance, the A record switches to the IP address of some other node.
Then we have a monitoring utility which is executed once every 5 minutes and uses both site1 and site2. For it to work properly, both names must point to the same DC, because data sync between DCs doesn't happen that fast. Since both CNAMEs are in fact linked to the same A record with a short TTL, at first glance it seems like nothing can go wrong. But it turns out it can.
The utility is written in C# for .NET Framework 4.7.2 and uses the HttpClient class for performing requests to both sites. Yeah, it's him again.
We noticed that when a server node switch occurs, the utility often starts acting as if site1 and site2 were in different DCs. The pattern of its behavior in such moments is strictly determined, so it's not like it gets confused somewhere in the middle of the process - it incorrectly resolves one or both of these addresses from the very start.
I made another, much simpler utility which just sends one GET request to site1, and then started intentionally switching nodes on and off and running this utility to see which DC would serve its request. The results were very frustrating.
Despite the Windows DNS cache already being updated (checked via ipconfig and the Get-DnsClientCache cmdlet) and despite the records' overall TTL of 60 seconds, HttpClient keeps sending requests to the old IP address, sometimes for another 15-20 minutes. Even when I completely shut down the "outdated" application server, the utility kept trying to connect to it, so even connection failures don't wake it up.
It becomes even more frustrating if you start running ipconfig /flushdns in between utility runs. Sometimes after a flushdns the utility realizes that the IP has changed. But as soon as you do another flushdns (or maybe it's not even needed - I haven't figured this out 100%) and run the utility again, it goes back to the old address! Unbelievable!
And to add even more frustration: if you resolve the IP address from within the same utility using the Dns.GetHostEntry method (which uses the cache, as per this comment) right before calling HttpClient, the resolved address will be correct... But HttpClient will nevertheless make a connection to an IP address seemingly of its own independent choice. So HttpClient somehow does not seem to rely on the built-in .NET Framework DNS resolution.
So the questions are:
- Where does a newly created .NET Framework process take those cached DNS results from?
- Even if there is some kind of a mystical global .NET-specific DNS cache, then why does it absolutely ignore TTL?
- How is it possible at all that it reverts to the outdated old IP address after it has already once "understood" that the address has changed?
P.S. I have worked around all of this by implementing a custom HttpClientHandler which performs DNS queries on each hostname's first usage, and is thus independent of external DNS caches (except for caching at intermediate DNS servers, which also affects things to some extent). But that was a little tricky in terms of TLS certificate validation, and the final solution does not seem production-ready - but we use it for monitoring only, so for us it's OK. If anyone is interested in this, I'll show the class code, which somewhat resembles this answer's example.
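For illustration, here is a rough Python analogue of that idea, not the author's C# class: resolve the hostname explicitly on every call and connect to the returned IP while presenting the original Host header. It sticks to plain HTTP because, as noted above, certificate validation is exactly where this trick gets hairy.

import socket
import urllib.request

def fetch_with_fresh_dns(host: str, path: str = "/") -> bytes:
    # Resolve on every call instead of trusting any per-process or proxy cache.
    infos = socket.getaddrinfo(host, 80, family=socket.AF_INET, type=socket.SOCK_STREAM)
    ip = infos[0][4][0]
    # Connect to the raw IP but keep the original Host header.
    req = urllib.request.Request(f"http://{ip}{path}", headers={"Host": host})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

# e.g. fetch_with_fresh_dns("eu.site1.myprodserver.com")  # hostname from the example above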
Update 2021-10-08
The utility works from behind a corporate proxy. In fact, there are multiple proxies for load balancing. So I am now also in the process of verifying this:
- If the DNS resolving is performed by the proxies and they don't respect the TTL or if they cache (keep alive) TCP connections by hostnames - this would explain the whole problem
- If it's possible that different proxies handle HTTP requests on different runs of the utility - this would answer the most frustrating question #3
The answer to "Does .NET Framework has an OS-independent global DNS cache?" is NO. HttpClient class or .NET Framework in general had nothing to do with all of this. Posted my investigation results as an accepted answer.
...
ANSWER
Answered 2021-Oct-14 at 21:32

HttpClient, please forgive me! It was not your fault!
Well, this investigation was huge. And I'll have to split the answer into two parts since there turned out to be two unconnected problems.
1. The proxy server problem
As I said, the utility was being tested from behind a corporate proxy. In case you didn't know (like I didn't until recently), when using a proxy server it's not your machine performing DNS queries - it's the proxy server doing them for you.
I made some measurements to understand how long the utility keeps connecting to the wrong DC after the DNS record switch. And the answer was a fantastically exact 30 minutes. This experiment also clearly showed that the local Windows DNS cache has nothing to do with it: those 30 minutes started exactly at the point when the proxy server was waking up (finally starting to send HTTP requests to the right DC).
The exact number of 30 minutes helped one of our administrators finally figure out that the proxy servers have a configuration parameter for minimal DNS TTL, which is set to 1800 seconds by default. So the proxies have their own DNS cache. These are hardware Cisco proxies, and the admin also noted that this parameter is "hidden quite deeply" and is not even mentioned in the user manual.
As soon as the proxies' minimal DNS TTL was changed from 1800 seconds to 1 second (yeah, admins have no mercy), the issue stopped reproducing on my machine.
But what about "forgetting" the just-understood correct IP address and falling back to the old one?Well. As I also said there are several proxies. There is a single corporate proxy DNS name, but if you run nslookup
for it - it shows multiple IPs behind it. Each time the proxy server's IP address is resolved (for example when local cache expires) - there's quite a bit of a chance that you'll jump onto another proxy server.
And that's exactly what ipconfig /flushdns had been doing to me. As soon as I started playing around with the proxy servers using their direct IP addresses instead of their common DNS name, I found that different proxies may easily route identical requests to different DCs. That's because some of them have those 30-minute-cached DNS records while others have to perform resolution.
Unfortunately, after the proxy theory was proven, more news came in: the production monitoring servers are placed outside the corporate network and do not use any proxy servers. So here we go...
2. The short TTL and public DNS servers problem
The monitoring servers are configured to use Google's 8.8.8.8 and 8.8.4.4 DNS servers. The resolve responses for our short-lived DNS records from these servers are somewhat weird:
- The returned TTL of CNAME records swings around the 1-hour mark: it gradually decreases for several minutes and then jumps back to 3600 seconds, and so on.
- The returned TTL of the root A record is almost always exactly 60 seconds. I occasionally received various numbers less than 60, but there was no obvious, humanly perceivable logic to them. So it seems these IP addresses in fact point to balancers that distribute requests between multiple similar DNS servers which are not synced with each other (and each of which has its own cache).
Windows is not stupid, and according to my experiments it doesn't care about the CNAME's TTL and only cares about the root A record's TTL, so its client cache, even for CNAME records, is never assigned a TTL higher than 60 seconds.
But due to the inconsistency (or, in some sense, over-consistency?) of the A record TTL which Google's servers return (an unpredictable 0-60 seconds), the Windows local cache gets confused. Two facts demonstrated this:
- Multiple calls to Resolve-DnsName for site1 and site2 over several minutes, with random pauses between them, eventually led to Get-DnsClientCache showing that the local cache TTLs of the two site names had diverged by up to 15 seconds. That is a big enough difference to sometimes mess things up. And that's just my short experiment, so I'm quite sure it can actually get bigger.
- Executing Invoke-WebRequest against each of the sites, one right after the other, once every 3-5 seconds while switching the DNS records, twice let me face a situation where the requests went to different DCs.
Conclusion?
The latter experiment had one strange detail I can't explain: calling Get-DnsClientCache after Invoke-WebRequest shows no records appearing in the local cache for the just-requested site names. But in any case, the problem has clearly been reproduced.
It will take time to see whether my workaround with real-time DNS resolution brings any improvement. Unfortunately, I don't believe it will - the DNS servers used in production (which would eventually be used by the monitoring utility for real-time IP resolution) are public Google DNS, which are not reliable in my case.
And one thing that is worse than an intermittently failing monitoring utility is that real-world users also rely on public DNS servers, and they definitely do face problems during our maintenance work or significant failures.
So have we learned anything out of all this?
- Maybe a short DNS TTL is generally a bad practice?
- Maybe we should install additional routers, assign them static IPs, attach the DNS names to them and then route traffic internally between our DCs to finally stop relying on DNS records changing?
- Or maybe public DNS servers are doing a bad job?
- Or maybe the technological singularity is closer than we think?
I have no idea. But it's quite possible that "yes" is the right answer to all of these questions.
However, there is one thing we surely have learned: network hardware manufacturers should write their documentation better.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install sites
$ mv .env.example .env
$ bundle install