
cat | CAT is a server-side monitoring component that ships Java, C/C++, Node.js, Python, and Go clients. It is deeply integrated into Meituan-Dianping's infrastructure middleware (MVC framework, RPC framework, database framework, cache framework, message queues, configuration system, etc.) and provides rich performance metrics, health status, and real-time alerting for Meituan-Dianping's business lines. | Monitoring library

by dianping | Java | Version: v3.0.0 | License: Apache-2.0



kandi X-RAY | cat Summary

cat is a Java library typically used in Performance Management, Monitoring, and Prometheus applications. cat has no reported bugs or vulnerabilities, has a build file available, carries a permissive license, and has high support. You can install it with 'pip install cat' or download it from GitHub or PyPI.
CAT is a server-side monitoring component that ships Java, C/C++, Node.js, Python, and Go clients. It is deeply integrated into Meituan-Dianping's infrastructure middleware (MVC framework, RPC framework, database framework, cache framework, message queues, configuration system, etc.) and provides rich performance metrics, health status, and real-time alerting for Meituan-Dianping's business lines.

Support

  • cat has a highly active ecosystem.
  • It has 16597 star(s) with 5143 fork(s). There are 1155 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 90 open issues and 1059 have been closed. On average, issues are closed in 443 days. There is 1 open pull request and 0 closed requests.
  • It has a positive sentiment in the developer community.
  • The latest version of cat is v3.0.0.

Quality

  • cat has 0 bugs and 0 code smells.

Security

  • cat has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • cat code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • cat is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • cat releases are available to install and integrate.
  • Deployable package is available in PyPI.
  • Build file is available. You can build the component from source.
  • Installation instructions are available. Examples and code snippets are not available.
  • cat saves you 303681 person hours of effort in developing the same functionality from scratch.
  • It has 291351 lines of code, 9565 functions and 3037 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed cat and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality cat implements and to help you decide whether it suits your requirements.

  • Decodes a line.
  • Processes the payload.
  • Visits a StatusInfo object.
  • Gets state info.
  • Starts the Jetty server.
  • Gets the next value.
  • Loads the message and stores it in the bucket.
  • Handles business pages.
  • Parses the root element.
  • Refreshes the inner config.

cat Key Features

CAT is a server-side monitoring component that ships Java, C/C++, Node.js, Python, and Go clients. It is deeply integrated into Meituan-Dianping's infrastructure middleware (MVC framework, RPC framework, database framework, cache framework, message queues, configuration system, etc.) and provides rich performance metrics, health status, and real-time alerting for Meituan-Dianping's business lines.

Error: require() of ES modules is not supported when importing node-fetch

// mod.cjs — CommonJS wrapper around the ESM-only node-fetch
const fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args));

// TypeScript variant, using node-fetch's exported types
import { RequestInfo, RequestInit } from "node-fetch";

const fetch = (url: RequestInfo, init?: RequestInit) => import("node-fetch").then(({ default: fetch }) => fetch(url, init));
-----------------------
"use strict";

let got;
let fetch;

module.exports.load = async function() {
    queueMicrotask(function() {
        // any ESM-only module that needs to be loaded can be loaded here, just add import for each using specific structure of each
        import("got").then(({default: Got}) => got = Got );
        fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args));
    });

    while(
        // check each module to see if it's been loaded here
        !got || !got.get || !fetch || typeof fetch !== "function"
         ) {
        // waiting for modules to load
        console.log("Waiting for ES-Only modules to load...");
        await new Promise((resolve)=>setTimeout(resolve, 1000));
    }
    module.exports.got = got;
    module.exports.fetch = fetch;
    console.log("ES-Only modules finished loading!");
}
"use strict";

const esmModules = require("esm_modules"); // or whatever you called the intermediary module
async function doMyFetching(url, options) {
   const fetch = esmModules.fetch;
   const result = await fetch(url, options)
}
-----------------------
yarn add node-fetch@^2.6.6

or

npm install node-fetch@^2.6.6
{
  "compilerOptions": { "allowJs": true, "outDir": "./dist" }
}
-----------------------
const _importDynamic = new Function('modulePath', 'return import(modulePath)');

export const fetch = async function (...args: any) {
    const {default: fetch} = await _importDynamic('node-fetch');
    return fetch(...args);
}

The unauthenticated git protocol on port 9418 is no longer supported

    - name: Fix up git URLs
      run: echo -e '[url "https://github.com/"]\n  insteadOf = "git://github.com/"' >> ~/.gitconfig
git config --global url."https://github.com/".insteadOf git://github.com/
git config --global url."git@github.com:".insteadOf git://github.com/
-----------------------
git config --global url."https://".insteadOf git://
-----------------------
[remote "upstream"]
    url = git://github.com/curlconverter/curlconverter.git
    fetch = +refs/heads/*:refs/remotes/upstream/*
[remote "upstream"]
    url = git@github.com:curlconverter/curlconverter.git
    fetch = +refs/heads/*:refs/remotes/upstream/*
-----------------------
git config --global url."https://github".insteadOf git://github
Unhandled rejection Error: Command failed: /usr/bin/git submodule update -q --init --recursive
warning: templates not found /tmp/pacote-git-template-tmp/git-clone-a001527f
fatal: remote error:
  The unauthenticated git protocol on port 9418 is no longer supported.
Please see https://github.blog/2021-09-01-improving-git-protocol-security-github/ for more information.
fatal: clone of 'git://github.com/jquery/sizzle.git' into submodule path '/root/.npm/_cacache/tmp/git-clone-19674e32/src/sizzle' failed
Failed to clone 'src/sizzle'. Retry scheduled
warning: templates not found /tmp/pacote-git-template-tmp/git-clone-a001527f
-----------------------
    insteadOf = ssh://
    insteadOf = git://

Dataframe from a character vector where variable name and its data were stored jointly

out <- type.convert(as.data.frame(read.dcf(
    textConnection(paste(gsub("\\s+\\|\\s+", "\n", foo$vars), 
    collapse="\n\n")))), as.is = TRUE)
> out
  animal wks site PI  GI
1  mouse  12 cage 78  NA
2    dog  32 <NA> NA 0.2
3    cat   8 wild 13  NA
> str(out)
'data.frame':   3 obs. of  5 variables:
 $ animal: chr  "mouse" "dog" "cat"
 $ wks   : int  12 32 8
 $ site  : chr  "cage" NA "wild"
 $ PI    : int  78 NA 13
 $ GI    : num  NA 0.2 NA
-----------------------
library(dplyr)
library(tidyr)
library(stringr)  # needed for str_trim()

tibble(foo) %>%
  mutate(row = row_number()) %>% 
  separate_rows(vars, sep = '\\|') %>% 
  separate(vars, c("a", "b"), sep = '\\:') %>% 
  mutate(across(everything(), str_trim)) %>% 
  group_by(a) %>% 
  pivot_wider(names_from = a, values_from = b) %>% 
  type.convert(as.is = TRUE) %>% 
  select(-row)
  animal   wks site     PI    GI
  <chr>  <int> <chr> <int> <dbl>
1 mouse     12 cage     78  NA  
2 dog       32 NA       NA   0.2
3 cat        8 wild     13  NA 
-----------------------
type.convert(
  Reduce(
    function(x, y) merge(x, y, all = TRUE),
    lapply(
      strsplit(foo$vars, ":|\\|"),
      function(x) {
        m <- matrix(trimws(x), 2)
        setNames(data.frame(m[2, , drop = FALSE]), m[1, ])
      }
    )
  ),
  as.is = TRUE
)
  animal wks site PI  GI
1    cat   8 wild 13  NA
2    dog  32 <NA> NA 0.2
3  mouse  12 cage 78  NA
-----------------------
lapply(1:nrow(foo), \(x) 
       scan(text=foo[x, ], what=character(), sep='|', strip.white=T, qui=T) |>
  (\(.) do.call(rbind, strsplit(., ': ')))() |>
  (\(.) setNames(data.frame(t(.[, 2])), .[, 1]))()) |>
  (\(.) Reduce(\(...) merge(..., all=TRUE), .))()
#   animal wks site   PI   GI
# 1    cat   8 wild   13 <NA>
# 2    dog  32 <NA> <NA>  0.2
# 3  mouse  12 cage   78 <NA>
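The same jointly-stored "name: value | name: value" parsing can be sketched in Python for comparison. This is a minimal illustration, not part of the original answers; the toy rows below mirror the example data, and values are kept as strings unless you convert them afterwards:

```python
import pandas as pd

# Toy rows mirroring the R example's foo$vars column.
vars_col = [
    "animal: mouse | wks: 12 | site: cage | PI: 78",
    "animal: dog | wks: 32 | GI: 0.2",
    "animal: cat | wks: 8 | site: wild | PI: 13",
]

# Split each row on '|', then each piece on the first ':' into (name, value).
records = [
    dict(part.split(":", 1) for part in row.split("|"))
    for row in vars_col
]
# Strip stray whitespace from names and values, then let pandas
# build one column per distinct name (missing entries become NaN).
records = [{k.strip(): v.strip() for k, v in r.items()} for r in records]
df = pd.DataFrame(records)
print(df)
```

Like the `read.dcf` and `pivot_wider` answers, this widens each record into one row per animal with NaN where a variable was absent.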

Merge two files, add computations, and sort the updated data in Python

import pandas as pd


def read_file(fn):
    """
    Read file fn and convert data into a dict of dict.
    data = {pname1: {grp: grp1, pname: pname1, cnt: cnt1, cat: cat1},
            pname2: {gpr: grp2, ...} ...}
    """
    data = {}
    with open(fn, 'r') as f:
        for lines in f:
            line = lines.rstrip()
            grp, pname, cnt, cat = line.split(maxsplit=3)
            data.update({pname: {'grp': float(grp.replace(',', '')), 'pname': pname, 'cnt': int(cnt), 'cat': cat}})
            
    return data


def process_data(oldfn, newfn):  
    """
    Read old and new files, update the old file based on new file.
    Save output to text, and csv files.
    """
    # Get old and new data in dict.
    old = read_file(oldfn)
    new = read_file(newfn)

    # Update old data based on new data
    u_data = {}
    for ko, vo in old.items():
        if ko in new:
            n = new[ko]
            
            # Update cnt.
            old_cnt = vo['cnt']
            new_cnt = n['cnt']
            u_cnt = old_cnt + new_cnt

            # cnt change, if old is zero we set it to 1 to avoid division by zero error.
            tmp_old_cnt = 1 if old_cnt == 0 else old_cnt
            cnt_change = 100 * (new_cnt - tmp_old_cnt) / tmp_old_cnt

            # grp change
            old_grp = vo['grp']
            new_grp = n['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp

            u_data.update({ko: {'grp': n['grp'], 'pname': n['pname'], 'cnt': u_cnt, 'cat': n['cat'],
                                'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)}})

    # add new data to u_data, that is not in old data
    for kn, vn in new.items():
        if kn not in old:        
            # Since this is new item its old cnt is zero, we set it to 1 to avoid division by zero error.
            old_cnt = 1
            new_cnt = vn['cnt']
            cnt_change = 100 * (new_cnt - old_cnt) / old_cnt        

            # grp change is similar to cnt change
            old_grp = 1
            new_grp = vn['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp
            
            # Update new columns.
            vn.update({'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)})        
            u_data.update({kn: vn})
            
    # Create new data mydata list from u_data, and only extract the dict value.
    mydata = []
    for _, v in u_data.items():
        mydata.append(v)
        
    # Convert mydata into pandas dataframe to easier manage the data.
    df = pd.DataFrame(mydata)
    df = df.sort_values(by=['cnt'], ascending=False)  # sort on cnt column
    
    # Save to csv file.
    df.to_csv('output.csv', index=False)
    
    # Save to text file.
    with open('output.txt', 'w') as w:
        w.write(f'{df.to_string(index=False)}')
        
    # Print in console.    
    print(df.to_string(index=False))


# Start
oldfn = 'F:/Tmp/oldFile.txt'
newfn = 'F:/Tmp/newFile.txt'
process_data(oldfn, newfn)
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
grp,pname,cnt,cat,cnt_change%,grp_change%
8739.0,6ea059a29eccecee4e250414,62,MAXIMACASH (MAXCAS...),2900.0,219.06
138.0,1c6bc8e962427deb4106ae06,58,Charge (Charge),525.0,-68.49
860.0,31b5c07636dab8f0909dbd2d,46,Buff Unicorn (BUFFUN...),566.67,272.29
200.0,9e4d81c8fc15870b15aef8dc,33,BABY BNB (BBNB),900.0,-28.32
20.0,5esdsds2sd15870b15aef8dc,30,CharliesAngel (CA),2900.0,1900.0
1560.0,c15e89f2149bcc0cbd5fb204,24,HUH_Token (HUH),400.0,11.75
-----------------------
import pandas as pd


def read_file(fn):
    """
    Read file fn and convert data into a dict of dict.
    data = {pname1: {grp: grp1, pname: pname1, cnt: cnt1, cat: cat1},
            pname2: {gpr: grp2, ...} ...}
    """
    data = {}
    with open(fn, 'r') as f:
        for lines in f:
            line = lines.rstrip()
            grp, pname, cnt, cat = line.split(maxsplit=3)
            data.update({pname: {'grp': float(grp.replace(',', '')), 'pname': pname, 'cnt': int(cnt), 'cat': cat}})
            
    return data


def process_data(oldfn, newfn):  
    """
    Read old and new files, update the old file based on new file.
    Save output to text, and csv files.
    """
    # Get old and new data in dict.
    old = read_file(oldfn)
    new = read_file(newfn)

    # Update old data based on new data
    u_data = {}
    for ko, vo in old.items():
        if ko in new:
            n = new[ko]
            
            # Update cnt.
            old_cnt = vo['cnt']
            new_cnt = n['cnt']
            u_cnt = old_cnt + new_cnt

            # cnt change, if old is zero we set it to 1 to avoid division by zero error.
            tmp_old_cnt = 1 if old_cnt == 0 else old_cnt
            cnt_change = 100 * (new_cnt - tmp_old_cnt) / tmp_old_cnt

            # grp change
            old_grp = vo['grp']
            new_grp = n['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp

            u_data.update({ko: {'grp': n['grp'], 'pname': n['pname'], 'cnt': u_cnt, 'cat': n['cat'],
                                'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)}})

    # add new data to u_data, that is not in old data
    for kn, vn in new.items():
        if kn not in old:        
            # Since this is new item its old cnt is zero, we set it to 1 to avoid division by zero error.
            old_cnt = 1
            new_cnt = vn['cnt']
            cnt_change = 100 * (new_cnt - old_cnt) / old_cnt        

            # grp change is similar to cnt change
            old_grp = 1
            new_grp = vn['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp
            
            # Update new columns.
            vn.update({'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)})        
            u_data.update({kn: vn})
            
    # Create new data mydata list from u_data, and only extract the dict value.
    mydata = []
    for _, v in u_data.items():
        mydata.append(v)
        
    # Convert mydata into pandas dataframe to easier manage the data.
    df = pd.DataFrame(mydata)
    df = df.sort_values(by=['cnt'], ascending=False)  # sort on cnt column
    
    # Save to csv file.
    df.to_csv('output.csv', index=False)
    
    # Save to text file.
    with open('output.txt', 'w') as w:
        w.write(f'{df.to_string(index=False)}')
        
    # Print in console.    
    print(df.to_string(index=False))


# Start
oldfn = 'F:/Tmp/oldFile.txt'
newfn = 'F:/Tmp/newFile.txt'
process_data(oldfn, newfn)
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
grp,pname,cnt,cat,cnt_change%,grp_change%
8739.0,6ea059a29eccecee4e250414,62,MAXIMACASH (MAXCAS...),2900.0,219.06
138.0,1c6bc8e962427deb4106ae06,58,Charge (Charge),525.0,-68.49
860.0,31b5c07636dab8f0909dbd2d,46,Buff Unicorn (BUFFUN...),566.67,272.29
200.0,9e4d81c8fc15870b15aef8dc,33,BABY BNB (BBNB),900.0,-28.32
20.0,5esdsds2sd15870b15aef8dc,30,CharliesAngel (CA),2900.0,1900.0
1560.0,c15e89f2149bcc0cbd5fb204,24,HUH_Token (HUH),400.0,11.75
-----------------------
import pandas as pd


def read_file(fn):
    """
    Read file fn and convert data into a dict of dict.
    data = {pname1: {grp: grp1, pname: pname1, cnt: cnt1, cat: cat1},
            pname2: {gpr: grp2, ...} ...}
    """
    data = {}
    with open(fn, 'r') as f:
        for lines in f:
            line = lines.rstrip()
            grp, pname, cnt, cat = line.split(maxsplit=3)
            data.update({pname: {'grp': float(grp.replace(',', '')), 'pname': pname, 'cnt': int(cnt), 'cat': cat}})
            
    return data


def process_data(oldfn, newfn):  
    """
    Read old and new files, update the old file based on new file.
    Save output to text, and csv files.
    """
    # Get old and new data in dict.
    old = read_file(oldfn)
    new = read_file(newfn)

    # Update old data based on new data
    u_data = {}
    for ko, vo in old.items():
        if ko in new:
            n = new[ko]
            
            # Update cnt.
            old_cnt = vo['cnt']
            new_cnt = n['cnt']
            u_cnt = old_cnt + new_cnt

            # cnt change, if old is zero we set it to 1 to avoid division by zero error.
            tmp_old_cnt = 1 if old_cnt == 0 else old_cnt
            cnt_change = 100 * (new_cnt - tmp_old_cnt) / tmp_old_cnt

            # grp change
            old_grp = vo['grp']
            new_grp = n['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp

            u_data.update({ko: {'grp': n['grp'], 'pname': n['pname'], 'cnt': u_cnt, 'cat': n['cat'],
                                'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)}})

    # add new data to u_data, that is not in old data
    for kn, vn in new.items():
        if kn not in old:        
            # Since this is new item its old cnt is zero, we set it to 1 to avoid division by zero error.
            old_cnt = 1
            new_cnt = vn['cnt']
            cnt_change = 100 * (new_cnt - old_cnt) / old_cnt        

            # grp change is similar to cnt change
            old_grp = 1
            new_grp = vn['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp
            
            # Update new columns.
            vn.update({'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)})        
            u_data.update({kn: vn})
            
    # Create new data mydata list from u_data, and only extract the dict value.
    mydata = []
    for _, v in u_data.items():
        mydata.append(v)
        
    # Convert mydata into pandas dataframe to easier manage the data.
    df = pd.DataFrame(mydata)
    df = df.sort_values(by=['cnt'], ascending=False)  # sort on cnt column
    
    # Save to csv file.
    df.to_csv('output.csv', index=False)
    
    # Save to text file.
    with open('output.txt', 'w') as w:
        w.write(f'{df.to_string(index=False)}')
        
    # Print in console.    
    print(df.to_string(index=False))


# Start
oldfn = 'F:/Tmp/oldFile.txt'
newfn = 'F:/Tmp/newFile.txt'
process_data(oldfn, newfn)
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
grp,pname,cnt,cat,cnt_change%,grp_change%
8739.0,6ea059a29eccecee4e250414,62,MAXIMACASH (MAXCAS...),2900.0,219.06
138.0,1c6bc8e962427deb4106ae06,58,Charge (Charge),525.0,-68.49
860.0,31b5c07636dab8f0909dbd2d,46,Buff Unicorn (BUFFUN...),566.67,272.29
200.0,9e4d81c8fc15870b15aef8dc,33,BABY BNB (BBNB),900.0,-28.32
20.0,5esdsds2sd15870b15aef8dc,30,CharliesAngel (CA),2900.0,1900.0
1560.0,c15e89f2149bcc0cbd5fb204,24,HUH_Token (HUH),400.0,11.75
-----------------------
import pandas as pd


def read_file(fn):
    """
    Read file fn and convert data into a dict of dict.
    data = {pname1: {grp: grp1, pname: pname1, cnt: cnt1, cat: cat1},
            pname2: {gpr: grp2, ...} ...}
    """
    data = {}
    with open(fn, 'r') as f:
        for lines in f:
            line = lines.rstrip()
            grp, pname, cnt, cat = line.split(maxsplit=3)
            data.update({pname: {'grp': float(grp.replace(',', '')), 'pname': pname, 'cnt': int(cnt), 'cat': cat}})
            
    return data


def process_data(oldfn, newfn):  
    """
    Read old and new files, update the old file based on new file.
    Save output to text, and csv files.
    """
    # Get old and new data in dict.
    old = read_file(oldfn)
    new = read_file(newfn)

    # Update old data based on new data
    u_data = {}
    for ko, vo in old.items():
        if ko in new:
            n = new[ko]
            
            # Update cnt.
            old_cnt = vo['cnt']
            new_cnt = n['cnt']
            u_cnt = old_cnt + new_cnt

            # cnt change, if old is zero we set it to 1 to avoid division by zero error.
            tmp_old_cnt = 1 if old_cnt == 0 else old_cnt
            cnt_change = 100 * (new_cnt - tmp_old_cnt) / tmp_old_cnt

            # grp change
            old_grp = vo['grp']
            new_grp = n['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp

            u_data.update({ko: {'grp': n['grp'], 'pname': n['pname'], 'cnt': u_cnt, 'cat': n['cat'],
                                'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)}})

    # add new data to u_data, that is not in old data
    for kn, vn in new.items():
        if kn not in old:        
            # Since this is new item its old cnt is zero, we set it to 1 to avoid division by zero error.
            old_cnt = 1
            new_cnt = vn['cnt']
            cnt_change = 100 * (new_cnt - old_cnt) / old_cnt        

            # grp change is similar to cnt change
            old_grp = 1
            new_grp = vn['grp']
            grp_change = 100 * (new_grp - old_grp) / old_grp
            
            # Update new columns.
            vn.update({'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)})        
            u_data.update({kn: vn})
            
    # Create new data mydata list from u_data, and only extract the dict value.
    mydata = []
    for _, v in u_data.items():
        mydata.append(v)
        
    # Convert mydata into pandas dataframe to easier manage the data.
    df = pd.DataFrame(mydata)
    df = df.sort_values(by=['cnt'], ascending=False)  # sort on cnt column
    
    # Save to csv file.
    df.to_csv('output.csv', index=False)
    
    # Save to text file.
    with open('output.txt', 'w') as w:
        w.write(f'{df.to_string(index=False)}')
        
    # Print in console.    
    print(df.to_string(index=False))


# Start
oldfn = 'F:/Tmp/oldFile.txt'
newfn = 'F:/Tmp/newFile.txt'
process_data(oldfn, newfn)
   grp                    pname  cnt                      cat  cnt_change%  grp_change%
8739.0 6ea059a29eccecee4e250414   62   MAXIMACASH (MAXCAS...)      2900.00       219.06
 138.0 1c6bc8e962427deb4106ae06   58          Charge (Charge)       525.00       -68.49
 860.0 31b5c07636dab8f0909dbd2d   46 Buff Unicorn (BUFFUN...)       566.67       272.29
 200.0 9e4d81c8fc15870b15aef8dc   33          BABY BNB (BBNB)       900.00       -28.32
  20.0 5esdsds2sd15870b15aef8dc   30       CharliesAngel (CA)      2900.00      1900.00
1560.0 c15e89f2149bcc0cbd5fb204   24          HUH_Token (HUH)       400.00        11.75
grp,pname,cnt,cat,cnt_change%,grp_change%
8739.0,6ea059a29eccecee4e250414,62,MAXIMACASH (MAXCAS...),2900.0,219.06
138.0,1c6bc8e962427deb4106ae06,58,Charge (Charge),525.0,-68.49
860.0,31b5c07636dab8f0909dbd2d,46,Buff Unicorn (BUFFUN...),566.67,272.29
200.0,9e4d81c8fc15870b15aef8dc,33,BABY BNB (BBNB),900.0,-28.32
20.0,5esdsds2sd15870b15aef8dc,30,CharliesAngel (CA),2900.0,1900.0
1560.0,c15e89f2149bcc0cbd5fb204,24,HUH_Token (HUH),400.0,11.75
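The merge-and-compare logic above can also be expressed with pandas directly. The sketch below is a minimal illustration of the same idea — join old and new snapshots on a key, then derive the percentage-change columns. The tiny `old`/`new` frames and their values are made up for the example; the full script is elided above, so treat the merge step as an assumption:

```python
import pandas as pd

# Hypothetical old/new snapshots keyed by token name ("pname" is assumed
# to be the join key, as in the answer above).
old = pd.DataFrame({"pname": ["a1", "b2"], "cnt": [2, 8], "grp": [100.0, 40.0]})
new = pd.DataFrame({"pname": ["a1", "b2"], "cnt": [62, 58], "grp": [8739.0, 138.0]})

# Merge on the key, then compute percentage change of new vs old.
df = new.merge(old, on="pname", suffixes=("", "_old"))
df["cnt_change%"] = ((df["cnt"] - df["cnt_old"]) / df["cnt_old"] * 100).round(2)
df["grp_change%"] = ((df["grp"] - df["grp_old"]) / df["grp_old"] * 100).round(2)

df = df.drop(columns=["cnt_old", "grp_old"]).sort_values("cnt", ascending=False)
print(df.to_string(index=False))
```

Using `suffixes=("", "_old")` keeps the new values under the original column names, which matches the final output layout above.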
-----------------------
from convtools import conversion as c
from convtools.contrib.tables import Table

# your percentage change calculation
def c_change(column_name):
    return c.if_(
        c.and_(
            c.col(f"{column_name}_LEFT"),
            c.col(f"{column_name}_RIGHT").is_not(None),
        ),
        (
            (c.col(f"{column_name}_RIGHT") - c.col(f"{column_name}_LEFT"))
            / c.col(f"{column_name}_LEFT")
            * 100.0
        ).pipe(round, 2),
        None,
    )

prepare_columns = {
    "COLUMN_0": c.col("COLUMN_0").as_type(float),
    "COLUMN_2": c.col("COLUMN_2").as_type(float),
}
dialect = Table.csv_dialect(delimiter="\t")

sorted_rows = sorted(
    Table.from_csv("tmp1.csv", dialect=dialect)
    .update(**prepare_columns)
    .join(
        Table.from_csv(
            "tmp2.csv",
            dialect=dialect,
        ).update(**prepare_columns),
        on=["COLUMN_1", "COLUMN_3"],
        how="full",
    )
    .update(
        COLUMN_4=c_change("COLUMN_2"),
        COLUMN_5=c_change("COLUMN_0"),
        COLUMN_2=c.col("COLUMN_2_RIGHT"),
        COLUMN_0=c.col("COLUMN_0_RIGHT"),
    )
    .take(
        "COLUMN_0",
        "COLUMN_1",
        "COLUMN_2",
        "COLUMN_3",
        "COLUMN_4",
        "COLUMN_5",
    )
    .into_iter_rows(tuple),
    key=lambda row: row[2],
    reverse=True,
)

Table.from_rows(sorted_rows).into_csv("tmp_result.csv", dialect=dialect)

COLUMN_0    COLUMN_1    COLUMN_2    COLUMN_3    COLUMN_4    COLUMN_5
8739.0  6ea059a29eccecee4e250414    60.0    MAXIMACASH (MAXCAS...)  2900.0  219.06
138.0   1c6bc8e962427deb4106ae06    50.0    Charge (Charge) 525.0   -68.49
860.0   31b5c07636dab8f0909dbd2d    40.0    Buff Unicorn (BUFFUN...)    566.67  272.29
200.0   9e4d81c8fc15870b15aef8dc    30.0    BABY BNB (BBNB) 900.0   -28.32
20.0    5esdsds2sd15870b15aef8dc    30.0    CharliesAngel (CA)      
1560.0  c15e89f2149bcc0cbd5fb204    20.0    HUH_Token (HUH) 400.0   11.75

xcrun: error: SDK "iphoneos" cannot be located

sudo xcode-select --switch /Applications/Xcode.app

Print first few and last few lines of file through a pipe with "..." in the middle

(head -n 2; echo "..."; tail -n 2) < file
1
2
...
9
10
-----------------------
awk -v top=2 -v bot=2 'FNR == NR {++n; next} FNR <= top || FNR > n-top; FNR == top+1 {print "..."}' file{,}

1
2
...
9
10
-----------------------
awk -v head=2 -v tail=2 'FNR==NR && FNR<=head
FNR==NR && cnt++==head {print "..."}
NR>FNR && FNR>(cnt-tail)' file file
perl -0777 -lanE 'BEGIN{$head=2; $tail=2;}
END{say join("\n", @F[0..$head-1],("..."),@F[-$tail..-1]);}' file   
awk -v head=2 -v tail=2 'FNR<=head
{lines[FNR]=$0}
END{
    print "..."
    for (i=FNR-tail+1; i<=FNR; i++) print lines[i]
}' file
head -2 file; echo "..."; tail -2 file
1
2
...
9
10
-----------------------
sed '1,2b
     3c\
...
     N
     $!D'
sed '1,2b
     3c\
...
     $!{h;d;}
     H;g'
-----------------------
h=2 t=3

cat temp | awk -v head=${h} -v tail=${t} '
    { if (NR <= head) print $0
      lines[NR % tail] = $0
    }

END { print "..."

      if (NR < tail) i=0
      else           i=NR

      do { i=(i+1)%tail
           print lines[i]
         } while (i != (NR % tail) )
    }'
1
2
...
8
9
10
$ cat temp4
1
2
3
4
$ cat temp4 | awk -v head=${h} -v tail=${t} '...'
1
2
3
...
2
3
4
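The shell, awk, perl, and sed answers above all implement the same single-pass pattern: emit the first few lines immediately and keep a bounded buffer holding the last few. For comparison, here is a sketch of that pattern in Python (function name and defaults are illustrative, not from any answer above):

```python
from collections import deque

def head_tail(lines, head=2, tail=2):
    """Yield the first `head` lines, "...", then the last `tail` of the
    remaining lines, in a single pass (a deque keeps the rolling tail)."""
    rest = deque(maxlen=tail)
    for i, line in enumerate(lines):
        if i < head:
            yield line
        else:
            rest.append(line)
    if rest:
        yield "..."
        yield from rest

lines = [str(i) for i in range(1, 11)]
print("\n".join(head_tail(lines)))  # prints 1 2 ... 9 10, one per line
```

Because the head lines are never added to the buffer, a short input simply prints everything once, avoiding the overlap shown in the 4-line `temp4` example above.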

awk FS vs FPAT puzzle and counting words but not blank fields

#!awk
{
    s = $0
    while (match(s, /[[:alpha:]]+/)) {
        word = substr(s, RSTART, RLENGTH)
        count[tolower(word)]++
        s = substr(s, RSTART+RLENGTH)
    }
}
END {
    for (word in count) print count[word], word
}
$ awk -f countwords.awk file
1 or
3 this
2 that
-----------------------
$ gawk -v RS="[^[:alpha:]]+" '  # [^a-zA-Z] or something for some awks
$0 {                            # remove possible leading null string
    a[tolower($0)]++
}
END {
    for(i in a)
        print i,a[i]
}' file
this 3
or 1
that 2
-----------------------
awk -v RS='[[:alpha:]]+' '
RT{
  val[tolower(RT)]++
}
END{
  for(word in val){
    print val[word], word
  }
}
' Input_file
-----------------------
awk 'patsplit($0, a, /[[:alpha:]]+/) {for (i in a) b[ tolower(a[i]) ]++} END {for (j in b) print b[j], j}' file
3 this
1 or
2 that
-----------------------
awk -F '[^[:alpha:]]+' '
{for (i=1; i<=NF; ++i) ($i != "") && ++count[tolower($i)]}
END {for (e in count) printf "%4s %s\n", count[e], e}' file

   1 or
   3 this
   2 that
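For reference, the same counting technique — match runs of alphabetic characters, compare case-insensitively — can be sketched in Python with `re.findall` and `collections.Counter` (the sample string is made up to mirror the counts above):

```python
import re
from collections import Counter

def count_words(text):
    # Match runs of ASCII letters, like awk's /[[:alpha:]]+/ in the C locale,
    # and count them case-insensitively.
    return Counter(w.lower() for w in re.findall(r"[A-Za-z]+", text))

sample = "This! this, that? or that... THIS"
for word, n in count_words(sample).items():
    print(n, word)
```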

How to use axios HttpService from Nest.js to make a POST request

import { AxiosResponse } from 'axios'
-----------------------
this.httpService.post(url, data, options).pipe(
  tap((resp) => console.log(resp)),
  map((resp) => resp.data),
  tap((data) =>  console.log(data)),
);
const respData = await lastValueFrom(
  this.httpService.post(url, data, options).pipe(
    map(resp => resp.data)
  )
);
const requestConfig: AxiosRequestConfig = {
  headers: {
    'Content-Type': 'YOUR_CONTENT_TYPE_HEADER',
  },
  params: {
    param1: 'YOUR_VALUE_HERE'
  },
};

const responseData = await lastValueFrom(
  this.httpService.post(requestUrl, null, requestConfig).pipe(
    map((response) => {
      return response.data;
    }),
  ),
);

Using Docker-Desktop for Windows, how can sysctl parameters be configured to permeate a reboot?

[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"
> sysctl vm.max_map_count
vm.max_map_count = 262144
[boot]
command="sysctl -w vm.max_map_count=262144"
wsl.exe -d docker-desktop sh -c "sysctl -w vm.max_map_count=262144"

Given a Python list of lists, find all possible flat lists that keeps the order of each sublist?

from itertools import permutations, chain

ll = [["D", "O", "G"], ["C", "A", "T"], ["F", "I", "S", "H"]]

x = [[(i1, i2, o) for i2, o in enumerate(subl)] for i1, subl in enumerate(ll)]
l = sum(len(subl) for subl in ll)


def is_valid(c):
    seen = {}
    for i1, i2, _ in c:
        if i2 != seen.get(i1, -1) + 1:
            return False
        else:
            seen[i1] = i2
    return True


for c in permutations(chain(*x), l):
    if is_valid(c):
        print([o for *_, o in c])
['D', 'O', 'G', 'C', 'A', 'T', 'F', 'I', 'S', 'H']
['D', 'O', 'G', 'C', 'A', 'F', 'T', 'I', 'S', 'H']
['D', 'O', 'G', 'C', 'A', 'F', 'I', 'T', 'S', 'H']
['D', 'O', 'G', 'C', 'A', 'F', 'I', 'S', 'T', 'H']
['D', 'O', 'G', 'C', 'A', 'F', 'I', 'S', 'H', 'T']
['D', 'O', 'G', 'C', 'F', 'A', 'T', 'I', 'S', 'H']
['D', 'O', 'G', 'C', 'F', 'A', 'I', 'T', 'S', 'H']
['D', 'O', 'G', 'C', 'F', 'A', 'I', 'S', 'T', 'H']

...

['F', 'I', 'S', 'H', 'C', 'D', 'A', 'O', 'T', 'G']
['F', 'I', 'S', 'H', 'C', 'D', 'A', 'T', 'O', 'G']
['F', 'I', 'S', 'H', 'C', 'A', 'D', 'O', 'G', 'T']
['F', 'I', 'S', 'H', 'C', 'A', 'D', 'O', 'T', 'G']
['F', 'I', 'S', 'H', 'C', 'A', 'D', 'T', 'O', 'G']
['F', 'I', 'S', 'H', 'C', 'A', 'T', 'D', 'O', 'G']
-----------------------
from itertools import permutations

Ls = [['D', 'O', 'G'], ['C', 'A', 'T']]
L_flattened = []
for L in Ls:
    for item in L:
        L_flattened.append(item)

print("L_flattened:", L_flattened)

print(list(permutations(L_flattened, len(L_flattened))))


[('D', 'O', 'G', 'C', 'A', 'T'), ('D', 'O', 'G', 'C', 'T', 'A'), ('D', 'O', 'G', 'A', 'C', 'T'), ('D', 'O', 'G', 'A', 'T', 'C'), ('D', 'O', 'G', 'T', 'C', 'A'), ('D', 'O', 'G', 'T', 'A', 'C'), ('D', 'O', 'C', 'G', 'A', 'T'), 
 ('D', 'O', 'C', 'G', 'T', 'A'), ('D', 'O', 'C', 'A', 'G', 'T'), ('D', 'O', 'C', 'A', 'T', 'G'),
 ...
-----------------------
set(itertools.permutations("RRRYYYGGGG"))
elements = []
for key, lst in enumerate(ll):
    elements.extend( [ key ] * len(lst))
pick_orders = set(itertools.permutations(elements))
-----------------------
import random
import itertools
import numpy as np

ll = [['D', 'O', 'G'], ['C', 'A', 'T'], ['F', 'I', 'S', 'H']]
flat = [x for l in ll for x in l]

all_permutations = list(itertools.permutations(flat))
good_permutations = []
count = 0
for perm in all_permutations:
  count += 1
  cond = True
  for l in ll:
    idxs = [perm.index(x) for i, x in enumerate(flat) if x in l]
    # check if ordered
    if not np.all(np.diff(np.array(idxs)) >= 0):
      cond = False
      break
  if cond == True:
    good_permutations.append(perm)
  if count >= 10000:
    break

print(len(good_permutations))
-----------------------
def recurse(lst, indices, total, curr):
    done = True
    for l, (pos, index) in zip(lst, enumerate(indices)):
        if index < len(l): # can increment index
            curr.append(l[index]) # add on corresponding value
            indices[pos] += 1 # increment index
            recurse(lst, indices, total, curr)
            # backtrack
            indices[pos] -= 1
            curr.pop()
            done = False # modification made, so not done

    if done: # no changes made
        total.append(curr.copy())

    return

def list_to_all_flat(lst):
    seq = [0] * len(lst) # set up indexes
    total, curr = [], []
    recurse(lst, seq, total, curr)
    return total

if __name__ == "__main__":
    lst = [['D', 'O', 'G'], ['C', 'A', 'T'], ['F', 'I', 'S', 'H']]
    print(list_to_all_flat(lst))

-----------------------
ll = [['D', 'O', 'G'], ['C', 'A', 'T'], ['F', 'I', 'S', 'H']]
def get_combos(d, c = []):
   if not any(d) and len(c) == sum(map(len, ll)):
       yield c
   elif any(d):
      for a, b in enumerate(d):
         for j, k in enumerate(b):
            yield from get_combos(d[:a]+[b[j+1:]]+d[a+1:], c+[k])

print(list(get_combos(ll)))
[['D', 'O', 'G', 'C', 'A', 'T', 'F', 'I', 'S', 'H'], ['D', 'O', 'G', 'C', 'A', 'F', 'T', 'I', 'S', 'H'], ['D', 'O', 'G', 'C', 'A', 'F', 'I', 'T', 'S', 'H'], ['D', 'O', 'G', 'C', 'A', 'F', 'I', 'S', 'T', 'H'], ['D', 'O', 'G', 'C', 'A', 'F', 'I', 'S', 'H', 'T'], ['D', 'O', 'G', 'C', 'F', 'A', 'T', 'I', 'S', 'H'], ['D', 'O', 'G', 'C', 'F', 'A', 'I', 'T', 'S', 'H'], ['D', 'O', 'G', 'C', 'F', 'A', 'I', 'S', 'T', 'H'], ['D', 'O', 'G', 'C', 'F', 'A', 'I', 'S', 'H', 'T'], ['D', 'O', 'G', 'C', 'F', 'I', 'A', 'T', 'S', 'H']]
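A useful sanity check for any of the solutions above: the number of order-preserving flattenings is the multinomial coefficient n!/(n1!·n2!·…), e.g. 10!/(3!·3!·4!) = 4200 for the DOG/CAT/FISH input. A small sketch:

```python
from math import factorial

def interleaving_count(lists):
    """Number of ways to flatten the sublists while keeping each sublist's
    internal order: the multinomial coefficient n! / (n1! * n2! * ...)."""
    n = sum(len(l) for l in lists)
    result = factorial(n)
    for l in lists:
        result //= factorial(len(l))
    return result

ll = [["D", "O", "G"], ["C", "A", "T"], ["F", "I", "S", "H"]]
print(interleaving_count(ll))  # prints 4200
```

If a candidate solution yields a different number of results than this formula, it is either dropping or duplicating interleavings.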

Community Discussions

Trending Discussions on cat
  • Error: require() of ES modules is not supported when importing node-fetch
  • The unauthenticated git protocol on port 9418 is no longer supported
  • Dataframe from a character vector where variable name and its data were stored jointly
  • Merge two files and add computation and sorting the updated data in python
  • xcrun: error: SDK "iphoneos" cannot be located
  • Print first few and last few lines of file through a pipe with "..." in the middle
  • awk FS vs FPAT puzzle and counting words but not blank fields
  • How to use axios HttpService from Nest.js to make a POST request
  • Using Docker-Desktop for Windows, how can sysctl parameters be configured to permeate a reboot?
  • Given a Python list of lists, find all possible flat lists that keeps the order of each sublist?

QUESTION

Error: require() of ES modules is not supported when importing node-fetch

Asked 2022-Mar-28 at 07:04

I'm creating a program to analyze security camera streams and got stuck on the very first line. At the moment my .js file has nothing but the import of node-fetch and it gives me an error message. What am I doing wrong?

Running Ubuntu 20.04.2 LTS in Windows Subsystem for Linux.

Node version:

user@MYLLYTIN:~/CAMSERVER$ node -v
v14.17.6

node-fetch package version:

user@MYLLYTIN:~/CAMSERVER$ npm v node-fetch

node-fetch@3.0.0 | MIT | deps: 2 | versions: 63
A light-weight module that brings Fetch API to node.js
https://github.com/node-fetch/node-fetch

keywords: fetch, http, promise, request, curl, wget, xhr, whatwg

dist
.tarball: https://registry.npmjs.org/node-fetch/-/node-fetch-3.0.0.tgz
.shasum: 79da7146a520036f2c5f644e4a26095f17e411ea
.integrity: sha512-bKMI+C7/T/SPU1lKnbQbwxptpCrG9ashG+VkytmXCPZyuM9jB6VU+hY0oi4lC8LxTtAeWdckNCTa3nrGsAdA3Q==
.unpackedSize: 75.9 kB

dependencies:
data-uri-to-buffer: ^3.0.1 fetch-blob: ^3.1.2         

maintainers:
- endless <jimmy@warting.se>
- bitinn <bitinn@gmail.com>
- timothygu <timothygu99@gmail.com>
- akepinski <npm@kepinski.ch>

dist-tags:
latest: 3.0.0        next: 3.0.0-beta.10  

published 3 days ago by endless <jimmy@warting.se>

esm package version:

user@MYLLYTIN:~/CAMSERVER$ npm v esm

esm@3.2.25 | MIT | deps: none | versions: 140
Tomorrow's ECMAScript modules today!
https://github.com/standard-things/esm#readme

keywords: commonjs, ecmascript, export, import, modules, node, require

dist
.tarball: https://registry.npmjs.org/esm/-/esm-3.2.25.tgz
.shasum: 342c18c29d56157688ba5ce31f8431fbb795cc10
.integrity: sha512-U1suiZ2oDVWv4zPO56S0NcR5QriEahGtdN2OR6FiOG4WJvcjBVFB0qI4+eKoWFH483PKGuLuu6V8Z4T5g63UVA==
.unpackedSize: 308.6 kB

maintainers:
- jdalton <john.david.dalton@gmail.com>

dist-tags:
latest: 3.2.25  

published over a year ago by jdalton <john.david.dalton@gmail.com>

Contents of the .js file (literally nothing but the import):

user@MYLLYTIN:~/CAMSERVER$ cat server.js 
import fetch from "node-fetch";

Result:

user@MYLLYTIN:~/CAMSERVER$ node -r esm server.js 
/home/user/CAMSERVER/node_modules/node-fetch/src/index.js:1
Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/user/CAMSERVER/node_modules/node-fetch/src/index.js
require() of ES modules is not supported.
require() of /home/user/CAMSERVER/node_modules/node-fetch/src/index.js from /home/user/CAMSERVER/server.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/user/CAMSERVER/node_modules/node-fetch/package.json.

    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1089:13) {
  code: 'ERR_REQUIRE_ESM'
}
user@MYLLYTIN:~/CAMSERVER$ 

ANSWER

Answered 2022-Feb-25 at 00:00

Use ESM syntax, and use one of these methods before running the file:

  1. Specify "type": "module" in package.json
  2. Or pass the --input-type=module flag when running the file
  3. Or use the .mjs file extension
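For option 1, the relevant package.json fragment looks like the sketch below; the package name and version pin are hypothetical, and only the "type": "module" field matters here:

```json
{
  "name": "camserver",
  "type": "module",
  "dependencies": {
    "node-fetch": "^3.0.0"
  }
}
```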

Source https://stackoverflow.com/questions/69041454

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install cat

Deployment FAQ
Cluster Deployment
Report Overview
Configuration Manual

Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

DOWNLOAD this Library from

Find, review, and download reusable Libraries, Code Snippets, Cloud APIs from
over 430 million Knowledge Items
Find more libraries
Reuse Solution Kits and Libraries Curated by Popular Use Cases

Save this library and start creating your kit

Share this Page

share link
Consider Popular Monitoring Libraries

  • © 2022 Open Weaver Inc.