
cas | Apereo CAS Enterprise Single Sign On for all earthlings and beyond. | Authentication library

by apereo | Java | Version: v6.6.0-RC2 | License: Apache-2.0


kandi X-RAY | cas Summary

cas is a Java library typically used in Security, Authentication and Spring Boot applications. cas has no reported bugs or vulnerabilities, has a build file available, carries a permissive license and has high support. You can download it from GitHub or Maven.
Welcome to the home of the Central Authentication Service project, more commonly referred to as CAS. CAS is an enterprise multilingual single sign-on solution for the web and attempts to be a comprehensive platform for your authentication and authorization needs. CAS is an open and well-documented authentication protocol. The primary implementation of the protocol is an open-source Java server component by the same name hosted here, with support for a plethora of additional authentication protocols and features.
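
For a sense of what the protocol looks like from a client application's side, here is a minimal sketch (not taken from this repository) of CAS v3 service-ticket validation against a server's /p3/serviceValidate endpoint, using only the JDK's HttpClient. The server URL, service URL and ticket value are placeholder assumptions.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CasTicketValidation {
    public static void main(String[] args) throws Exception {
        String casServer = "https://cas.example.org/cas";       // assumption: your CAS server base URL
        String service   = "https://app.example.org/callback";  // assumption: the registered service URL
        String ticket    = "ST-1-abcdefg";                      // assumption: ticket received on the callback

        // CAS v3 validation is a plain GET to /p3/serviceValidate with service and ticket parameters.
        String url = casServer + "/p3/serviceValidate"
                + "?service=" + URLEncoder.encode(service, StandardCharsets.UTF_8)
                + "&ticket=" + URLEncoder.encode(ticket, StandardCharsets.UTF_8);

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // On success the body contains a <cas:authenticationSuccess> element with the
        // authenticated principal and released attributes; on failure, <cas:authenticationFailure>.
        System.out.println(response.body());
    }
}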

Support

  • cas has a highly active ecosystem.
  • It has 9377 stars, 3726 forks and 613 watchers.
  • There were 10 major releases in the last 6 months.
  • cas has no issues reported. There is 1 open pull request and 0 closed requests.
  • It has a negative sentiment in the developer community.
  • The latest version of cas is v6.6.0-RC2.

Quality

  • cas has 0 bugs and 0 code smells.

Security

  • cas has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • cas code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • cas is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • cas releases are available to install and integrate.
  • Deployable package is available in Maven.
  • Build file is available. You can build the component from source.
  • It has 320237 lines of code, 17894 functions and 6345 files.
  • It has low code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed cas and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality cas implements, and to help you decide whether it suits your requirements.

  • Export the actuators.
  • Translate oidc service.
  • Encodes a byte array to bytes.
  • New ldap connection configuration.
  • Configure saml client.
  • Process nested enum properties.
  • Gets state details.
  • Build jwt claims.
  • Reconciliation of OpenId Connections.
  • Main initialization bean.

cas Key Features

CAS v1, v2 and v3 Protocol (a minimal browser-flow sketch follows this feature list)

SAML v1 and v2 Protocol

OAuth v2 Protocol

OpenID & OpenID Connect Protocol

WS-Federation Passive Requestor Protocol

Authentication via JAAS, LDAP, RDBMS, X.509, Radius, SPNEGO, JWT, Remote, Apache Cassandra, Trusted, BASIC, Apache Shiro, MongoDb, Pac4J and more.

Delegated authentication to WS-FED, Facebook, Twitter, SAML IdP, OpenID, OpenID Connect, CAS and more.

Authorization via ABAC, Time/Date, REST, Internet2's Grouper and more.

HA clustered deployments via Hazelcast, Ehcache, JPA, Apache Cassandra, Memcached, Apache Ignite, MongoDb, Redis, DynamoDb, Couchbase and more.

Application registration backed by JSON, LDAP, YAML, Apache Cassandra, JPA, Couchbase, MongoDb, DynamoDb, Redis and more.

Multifactor authentication via Duo Security, YubiKey, RSA, Google Authenticator, U2F, WebAuthn and more.

Administrative UIs to manage logging, monitoring, statistics, configuration, client registration and more.

Global and per-application user interface theme and branding.

Password management and password policy enforcement.

Deployment options using Apache Tomcat, Jetty, Undertow, packaged and running as Docker containers.
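
As a concrete illustration of the protocol support listed above, the sketch below (not from this repository) shows the browser-facing half of the CAS flow: the application redirects an unauthenticated user to the CAS login endpoint with its callback URL as the service parameter, and CAS later redirects back with a one-time service ticket to validate. Host names are placeholder assumptions.

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CasLoginRedirect {
    public static void main(String[] args) {
        String casServer = "https://cas.example.org/cas";       // assumption: CAS server base URL
        String service   = "https://app.example.org/callback";  // assumption: this application's callback

        // 1. Send the unauthenticated browser to the CAS login page, naming our callback as the service.
        String loginUrl = casServer + "/login?service="
                + URLEncoder.encode(service, StandardCharsets.UTF_8);
        System.out.println("Redirect the browser to: " + loginUrl);

        // 2. After the user authenticates, CAS redirects back to the service URL with a
        //    one-time ticket appended, e.g. https://app.example.org/callback?ticket=ST-1-abcdefg.
        //    The application then validates that ticket server-side, as in the
        //    /p3/serviceValidate sketch shown earlier on this page.
    }
}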

Section title after content latex

\documentclass[a4paper,fleqn,draft]{cas-sc}

\usepackage[authoryear,longnamesfirst]{natbib}  


\ExplSyntaxOn

\cs_set:Npn \__reset_tbl:
{
  \tl_set:Nx \l_tbl_pos_tl { h }
  \tl_set:Nx \l_tbl_cols_tl { 1 }
  \tl_set:Nn \l_tbl_align_tl { \centering }
  \skip_set:Nn \l_tbl_abovecap_skip { 6pt }
  \skip_set:Nn \l_tbl_belowcap_skip { 0pt }
  \skip_set:Nn \l_tbl_abovetbl_skip { 6pt }
  \skip_set:Nn \l_tbl_belowtbl_skip { 6pt }
  
}

\ExplSyntaxOff


\begin{document}


\section{Tables and Numerical Results} 
\label{app:tables-results}
\begin{table}[htbp]
    \caption{caption}
    \label{tab}
\end{table}
text


\end{document}

How to open/download MODIS data in XArray using OPeNDAP

# Requires the pydap package for Earthdata (URS) authenticated OPeNDAP access.
from pydap.client import open_url
from pydap.cas.urs import setup_session

login_url = "https://opendap.cr.usgs.gov/opendap/hyrax/MOD13Q1.061/h01v10.ncml.ascii?YDim[0],XDim[0],time[0]"
dataset_url = 'https://opendap.cr.usgs.gov/opendap/hyrax/MOD13Q1.061/h01v10.ncml'
session = setup_session('my_username', 'my_password', check_url=login_url)
dataset = open_url(dataset_url, session=session)

Concatenating various dfs with different columns but removing repeats

import pandas as pd

# Assumes df_final and data_transposed are existing DataFrames; printing their
# columns in the except block helps spot mismatched column names.
try:
    df_final = pd.concat([df_final, data_transposed], ignore_index=True, axis=0)
except Exception as e:
    print('Error:', e)
    print('df_final:', df_final.columns)
    print('data_transposed:', data_transposed.columns)

ValueError: No gradients provided for any variable (TFCamemBERT)

# Re-map each (features, labels) pair into a single dict so the model's loss
# can find the labels under the 'labels' key.
def map_func(x, y):
  return {'input_ids': x['input_ids'], 'attention_mask': x['attention_mask'], 'labels': y}

train_dataset = train_dataset.map(map_func)

How to group rows based on a condition in a dataframe with python pandas?

df['EDAT'] = df['EDAT'].str.replace(r'(0 a 9|10 a 19)', '0 a 19', regex=True)
print(df)

# Output
         DATA     EDAT     ESDEVENIMENT        PAUTA  RECOMPTE
0  2021-10-10   0 a 19              Cas  No iniciada         6
1  2021-10-10   0 a 19              Cas     Completa         5
2  2021-10-10   0 a 19              Cas  No iniciada         6
3  2021-10-10  20 a 29              Cas     Completa         3
4  2021-10-10  20 a 29              Cas  No iniciada         4
5  2021-10-10  20 a 29  Hospitalització  No iniciada         2
6  2021-10-10  30 a 39              Cas     Completa         7
7  2021-10-10  30 a 39              Cas  No iniciada        10
8  2021-10-10  30 a 39              Cas      Parcial         1
9  2021-10-10  30 a 39  Hospitalització  No iniciada         2
-----------------------
df = pd.DataFrame({
    'DATA': ['2021-10-10'] * 10,
    'EDAT': ['0 a 9', '10 a 19', '10 a 19', '20 a 29', '20 a 29', '20 a 29',
             '30 a 39', '30 a 39', '30 a 39', '30 a 39'],
    'ESDEVENIMENT': ['Cas', 'Cas', 'Cas', 'Cas', 'Cas', 'Hospitalització',
                     'Cas', 'Cas', 'Cas', 'Hospitalització'],
    'PAUTA': ['No iniciada', 'Completa', 'No iniciada', 'Completa', 'No iniciada',
              'No iniciada', 'Completa', 'No iniciada', 'Parcial', 'No iniciada'],
    'RECOMPTE': [6, 5, 6, 3, 4, 2, 7, 10, 1, 2]})

df.loc[0:2, 'EDAT'] = '0 a 19'

print(df) 


    DATA        EDAT    ESDEVENIMENT    PAUTA       RECOMPTE
0   2021-10-10  0 a 19  Cas             No iniciada 6
1   2021-10-10  0 a 19  Cas             Completa    5
2   2021-10-10  0 a 19  Cas             No iniciada 6
3   2021-10-10  20 a 29 Cas             Completa    3
4   2021-10-10  20 a 29 Cas             No iniciada 4
5   2021-10-10  20 a 29 Hospitalització No iniciada 2
6   2021-10-10  30 a 39 Cas             Completa    7
7   2021-10-10  30 a 39 Cas             No iniciada 10
8   2021-10-10  30 a 39 Cas             Parcial     1
9   2021-10-10  30 a 39 Hospitalització No iniciada 2

How do I find specific symbol+character combo in T-SQL specifically using LIKE

SELECT *
FROM #AcctKeysInt
WHERE NewCustName LIKE '%-c[roaeh]%';

-- Alternatively, PATINDEX returns the 1-based position of the first match (0 if none):
SELECT PATINDEX('%-c[roaeh]%', IntCustName) AS MatchPosition
FROM #AcctKeysInt;

CAS 6.4 REST Authentication to External Service - Missing type id property '@class'

{
  "@class": "org.apereo.cas.authentication.principal.SimplePrincipal",
  "id": "abc",
  "attributes": {
    "@class": "java.util.LinkedHashMap",
    "username": [
      "java.util.List",
      ["jackson"]
    ]
  }
}
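
The payload above is what the external REST endpoint must return so that CAS's Jackson deserializer can reconstruct the principal; the "@class" entries are the type hints it looks for. Below is a minimal, hedged sketch of such an endpoint using Spring Web (it is not part of this repository); the path /cas-rest-authn and the hard-coded body are illustrative assumptions, and a real implementation would first verify the credentials CAS sends before answering.

import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RestAuthnController {

    // Illustrative endpoint path; point your CAS REST authentication configuration
    // at whatever URL you actually expose.
    @PostMapping(value = "/cas-rest-authn", produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<String> authenticate() {
        // Return the principal with explicit "@class" type hints, as in the payload above.
        String principal = """
            {
              "@class": "org.apereo.cas.authentication.principal.SimplePrincipal",
              "id": "abc",
              "attributes": {
                "@class": "java.util.LinkedHashMap",
                "username": [ "java.util.List", [ "jackson" ] ]
              }
            }
            """;
        return ResponseEntity.ok(principal);
    }
}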

How to access a specific properties from a complex JSON array object in React.js

  useEffect(() => {
    axios
      .get(baseURL)
      .then((response) => {
        // If the API returns the JSON as a string, parse it before storing it in state.
        setData(JSON.parse(response.data));
      })
      .catch((err) => console.error(err));
  }, []);
-----------------------
import React, { useState, useEffect } from "react";
import HelpOutlineIcon from "@mui/icons-material/HelpOutline";
import axios from "axios";
const VendorsDetail = () => {
  const [data, setData] = useState({});
  const baseURL =
    "http://127.0.0.1:8000/api/business_process/business-impact/vendor-product-detail";

  useEffect(() => {
    axios
      .get(baseURL)
      .then((response) => {
        setData(response.data);
      })
      .catch((err) => {
        alert("No Data To Show");
      });
  }, []);

  const DisplayData = data?.result?.CVE_Items?.map((vendor) => {
    return (
      <tr key={vendor?.cve?.CVE_data_meta?.ID}>
        <td>{vendor?.cve?.CVE_data_meta?.ID}</td>
        <td>{vendor?.cve?.description?.description_data?.[0]?.lang}</td>
        <td>{vendor?.cve?.impact?.exploitabilityScore}</td>
        <td>{vendor?.cve?.impact?.severity}</td>
        <td>{vendor?.cve?.impact?.impactScore}</td>
      </tr>
    );
  });
  return (
    <div className="z-100">
      <div className="text-black">
        <div className="rounded overflow-hidden flex  justify-center items-center">
          <table className="table table-striped">
            <thead>
              <tr>
                <th>ID</th>
                <th>DESCRIPTION DATA</th>
                <th>value</th>
                <th>exploitabilityScore</th>
                <th>severity</th>
                <th>impactScore</th>
              </tr>
            </thead>
            <tbody>{DisplayData}</tbody>
          </table>
        </div>
      </div>
    </div>
  );
};

export default VendorsDetail;
-----------------------
{
  "resultsPerPage": 7,
  "startIndex": 0,
  "totalResults": 7,
  "result": {
    "CVE_data_type": "CVE",
    "CVE_data_format": "MITRE",
    "CVE_data_version": "4.0",
    "CVE_data_timestamp": "2021-12-15T08:44Z",
    "CVE_Items": [
      {
        "cve": {
          "CVE_data_meta": { "ID": "THE ID YOU WANT" },
          "description": {
            "description_data": [{ "lang": "en", "value": "problem desc" }]
          }
        },
        "impact": {
          "baseMetricV2": {
            "cvssv2": {
              "severity": "HIGH",
              "exploitabilityScore": 10,
              "impactScore": 6.4
            }
          }
        }
      }
    ]
  }
}
const DisplayData = data?.result?.CVE_Items.map(({
  cve: { 
    CVE_data_meta: { ID }, 
    description: { description_data },
  },
  impact: {
    baseMetricV2: {
      cvssv2: {
        severity,
        exploitabilityScore,
        impactScore,
      }
    }
  }
}) => {
    return (
      <tr>
        <td>{ID}</td>
        <td>{description_data.map(({value}) =>value).join(",")}</td>
        <td>{exploitabilityScore}</td>
        <td>{severity}</td>
        <td>{impactScore}</td>
      </tr>
    );
});

How to add a percentage computation in pandas result

# sampledata.txt
import numpy as np
import pandas as pd
df = pd.DataFrame(data={'col1': ['alpha', 'bravo', 'charlie', 'delta', 'echo','lima', 'falcon', 'echo', 'charlie', 'romeo', 'falcon'],
                        'col2': [1, 3, 1, 2, 5, 6, 3, 8, 10, 12, 5],
                        'col3': ['54,00.01', '500,000.00', '27,722.29 ($250.45)', '11 ($10)', '143,299.00 ($101)', '45.00181 ($38.9)', '0.1234', '145,300 ($125.01)', '252,336,733.383 ($492.06)', '980', '9.19'],
                        'col4': ['ABC DSW2S', 'ACDEF', 'DGAS-CAS', 'SWSDSASS-CCSSW', 'ACS34S1', 'FGF5GGD-DDD', 'DSS2SFS3', 'ACS34S1', 'DGAS-CAS', 'ASDS SSSS SDSD', 'DSS2SFS3']})
df['within_brackets'] = df['col3'].str.extract(r'.*\((.*)\).*')  # extract what's inside the brackets
df['within_brackets'].replace(r'\$', '', regex=True, inplace=True)
df['col3'] = df['col3'].str.replace(r"(\s*\(.*\))|,", "", regex=True)  # keep what's outside the brackets
df.rename(columns={'col4': 'col5', 'within_brackets': 'col4'}, inplace=True)
df[['col3', 'col4']] = df[['col3', 'col4']].astype(float)

df = df.groupby(['col1', 'col5']).agg(col2 = pd.NamedAgg(column="col2", aggfunc="sum"),
                                      col3 = pd.NamedAgg(column="col3", aggfunc="sum"),
                                      col4 = pd.NamedAgg(column="col4", aggfunc="sum"),
                                      col6 = pd.NamedAgg(column="col2", aggfunc=pd.Series.pct_change)).reset_index()
df['col6'].fillna(0, inplace=True)
# print(df) here to inspect the intermediate result.
df['col6'] = df['col6'].apply(lambda x: f"{str(round(x[-1], 4) * 100)}%" if isinstance(x, np.ndarray) else f"{round(x, 4) * 100}%")
df = df[['col1', 'col2', 'col3', 'col4', 'col5', 'col6']]
df.sort_values(by=['col2'], ascending=False, inplace=True)
print(df)
      col1  col2          col3    col4            col5    col6
4     echo    13  2.885990e+05  226.01         ACS34S1   60.0%
7    romeo    12  9.800000e+02    0.00  ASDS SSSS SDSD      0%
2  charlie    11  2.523645e+08  742.51        DGAS-CAS  900.0%
5   falcon     8  9.313400e+00    0.00        DSS2SFS3  66.67%
6     lima     6  4.500181e+01   38.90     FGF5GGD-DDD      0%
1    bravo     3  5.000000e+05    0.00           ACDEF      0%
3    delta     2  1.100000e+01   10.00  SWSDSASS-CCSSW      0%
0    alpha     1  5.400010e+03    0.00       ABC DSW2S      0%
# Same aggregation again, additionally wrapping col4 as a bracketed dollar amount.
df = df.groupby(['col1', 'col5']).agg(col2 = pd.NamedAgg(column="col2", aggfunc="sum"),
                                      col3 = pd.NamedAgg(column="col3", aggfunc="sum"),
                                      col4 = pd.NamedAgg(column="col4", aggfunc="sum"),
                                      col6 = pd.NamedAgg(column="col2", aggfunc=pd.Series.pct_change)).reset_index()
df['col6'].fillna(0, inplace=True)
df['col6'] = df['col6'].apply(lambda x: f"{str(round(x[-1], 4) * 100)}%" if isinstance(x, np.ndarray) else f"{round(x, 4) * 100}%")
df['col4'] = '($' + df['col4'].astype(str) + ')'
df = df[['col1', 'col2', 'col3', 'col4', 'col5', 'col6']]
-----------------------
# Note: `dl` is assumed to hold the parsed rows of sampledata.txt (one list of fields per row).
def compute_percentage(row):
    vl = [float(parts[1]) for parts in dl if parts[0] == row['col1']]
    i = round(100. * (vl[-1]-vl[0])/vl[0] if vl[0] != 0 else 0, 2)
    if float(int(i)) == i:
        i = int(i)
    return str(i) + '%'

df['col6'] = df.apply(compute_percentage, axis=1)
      col1  col2          col3       col4            col5    col6
4     echo    13  2.885990e+05  ($226.01)         ACS34S1     60%
7    romeo    12  9.800000e+02     ($0.0)  ASDS SSSS SDSD      0%
2  charlie    11  2.523645e+08  ($742.51)        DGAS-CAS    900%
5   falcon     8  9.313400e+00     ($0.0)        DSS2SFS3  66.67%
6     lima     6  4.500181e+01    ($38.9)     FGF5GGD-DDD      0%
1    bravo     3  5.000000e+05     ($0.0)           ACDEF      0%
3    delta     2  1.100000e+01    ($10.0)  SWSDSASS-CCSSW      0%
0    alpha     1  5.400010e+03     ($0.0)       ABC DSW2S      0%

PowerShell XML Multiple ChildNodes Comma Separated Strings to Array

$fxsNumber = $test.FXSHotlineNumberList.TrimStart(',').Split(',')
# Since I don't have the original XML I'm using this to reproduce the object you already have.
$test = [pscustomobject]@{
    ApplyToChannelList = '1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,1.10,1.11,1.12,1.13,1.14,1.15,1.16,1.17,1.18,1.19,1.20,1.21,1.22,1.23,1.24,2.1,2.2,2.3,2.4,2.5,2.6,2.7,2.8,2.9,2.10,2.11,2.12,2.13,2.14,2.15,2.16,2.17,2.18,2.19,2.20'
    ChannelOwnNumberList = '71750,51375,53004,53146,53940,58940,70153,70309,71125,71681,75033,75021,75027,75015,75104,75120,75126,75145,75151,75320,75326,75351,75380,75449,52417,52895,75393,58627,53154,58473,51932,51871,53022,53595,19144,19152,19127,19137,19101,19105,19112,19117,53410,54952'
    FXSHotlineNumberList = ',,,,,,,,,,56072,56072,56072,56072,56072,,,,56072,,,,,,,,,,,,,,,,,,,,,,,,,'
}

$chnlist = $test.ApplyToChannelList.Split(',')
$chnOwnNumber = $test.ChannelOwnNumberList.Split(',')
$fxsNumber = $test.FXSHotlineNumberList.Split(',')

$count = ($chnlist.Count,$chnOwnNumber.Count,$fxsNumber.Count | Measure-Object -Maximum).Maximum

$result = for($i = 0; $i -lt $count; $i++)
{
    [pscustomobject]@{
        ApplyToChannelList = $chnlist[$i]
        ChannelOwnNumberList = $chnOwnNumber[$i]
        FXSHotlineNumberList = $fxsNumber[$i]
    }
}
PS /> $result | Select-Object -First 20

ApplyToChannelList ChannelOwnNumberList FXSHotlineNumberList
------------------ -------------------- --------------------
1.1                71750                                    
1.2                51375                                    
1.3                53004                                    
1.4                53146                                    
1.5                53940                                    
1.6                58940                                    
1.7                70153                                    
1.8                70309                                    
1.9                71125                                    
1.10               71681                                    
1.11               75033                56072               
1.12               75021                56072               
1.13               75027                56072               
1.14               75015                56072               
1.15               75104                56072               
1.16               75120                                    
1.17               75126                                    
1.18               75145                                    
1.19               75151                56072               
1.20               75320                                    

Community Discussions

Trending Discussions on cas
  • Section title after content latex
  • How to open/download MODIS data in XArray using OPeNDAP
  • Trusting individual invalid certs in mitmproxy
  • Is it possible to instrument a program that also uses dynamic bytecode generation?
  • A lag function honors how data are sorted, but crosses a group, and this I do not want: I must be misunderstanding how window and lag functions work
  • Concatenating various dfs with different columns but removing repeats
  • ValueError: No gradients provided for any variable (TFCamemBERT)
  • How to group rows based on a condition in a dataframe with python pandas?
  • How do I find specific symbol+character combo in T-SQL specifically using LIKE
  • CAS 6.4 REST Authentication to External Service - Missing type id property '@class'

QUESTION

Section title after content latex

Asked 2022-Mar-30 at 13:49

I am having a problem with LaTeX, and in particular with an appendix title. I have a main.tex file, and several .tex files (one for each section) that are then included in the main.tex file through \input{namefile.tex} command.

However, the appendix section has a problem: the title of the appendix goes under the first three tables inside it (all the table contents are hidden); an anonymized screenshot shows the title appearing under the tables.

The code of the main.tex file is as follows:

\documentclass[a4paper,fleqn]{cas-sc}
\UseRawInputEncoding

\usepackage[authoryear,longnamesfirst]{natbib}  

% Needed for subfigures
\usepackage{subcaption}

% Needed for better tables
\usepackage{tabularx}
\newcolumntype{g}{X}                                % big column
\newcolumntype{s}{>{\hsize=.5\hsize}X}              % small column
\newcolumntype{Y}{>{\centering\arraybackslash}X}    % centered column in tabularx environment

\usepackage{enumitem}

%%%Author macros
\def\tsc#1{\csdef{#1}{\textsc{\lowercase{#1}}\xspace}}
\tsc{WGM}
\tsc{QE}
%%%

\begin{document}
\let\WriteBookmarks\relax
\def\floatpagepagefraction{1}
\def\textpagefraction{.001}

% Short title
\shorttitle{shorttitle}    

% Short author
\shortauthors{anonymous}  

% Main title of the paper
\title[mode=title]{maintitle}  

% Address/affiliation 
\affiliation[aff]{organization={org},
            %addressline={}, 
            city={city},
%          citysep={}, % Uncomment if no comma needed between city and postcode
            postcode={pk}, 
            %state={st},
            country={c}
            }

% First author
%
% Options: Use if required
%\author[<aff no>]{<author name>}[<options>]

\author[aff]{anonymous}[type=editor]
% Corresponding author indication
\cormark[1]
% Footnote of the first author
%\fnmark[1]
% Email id of the author
\ead{mail}

% Corresponding author text
\cortext[1]{Corresponding author}

\begin{abstract}
Abstract
\end{abstract}

% Keywords
% Each keyword is separated by \sep
\begin{keywords}
kw1 \sep kw2
\end{keywords}

\maketitle

%*************************************************************
% Mainmatter
%*************************************************************

%\input{sections/01}
%\input{sections/02}
%\input{sections/03}
%\input{sections/04}

%% Loading bibliography style file
\bibliographystyle{cas-model2-names}

% Loading bibliography database
\bibliography{bibliography.bib}

\clearpage

%% The Appendices part is started with the command \appendix;
%% appendix sections are then done as normal sections
\appendix
\include{sections/appendix}

\end{document}

While the appendix.tex code is as follows (at the beginning):

\section{Tables and Numerical Results} 
\label{app:tables-results}

% Number the appendix tables as A.1, A.2, ...
\setcounter{table}{0}
\renewcommand{\thetable}{A.\arabic{table}}

\begin{table}[ht]
    \centering
    \renewcommand{\arraystretch}{1.5}
    \begin{tabularx}{\columnwidth}{XYY}
        \hline 
        &&\\
        \hline
        &&\\
        &&\\
        &&\\
        &&\\
        &&\\
        &&\\
        &&\\
        &&\\
        &&\\
        \hline
    \end{tabularx}
    \caption{caption}
    \label{tab}
\end{table}

All the other tables are pretty much the same.

I have already tried changing the [ht] option of the table, using both [h] and [h!], but without any result.

Could anyone help me?

ANSWER

Answered 2022-Mar-30 at 13:49

The class you are using defines tables and figures in such a way that their default position is only at the top of the page. You can hack the code like this:

\documentclass[a4paper,fleqn,draft]{cas-sc}

\usepackage[authoryear,longnamesfirst]{natbib}  


\ExplSyntaxOn

\cs_set:Npn \__reset_tbl:
{
  \tl_set:Nx \l_tbl_pos_tl { h }
  \tl_set:Nx \l_tbl_cols_tl { 1 }
  \tl_set:Nn \l_tbl_align_tl { \centering }
  \skip_set:Nn \l_tbl_abovecap_skip { 6pt }
  \skip_set:Nn \l_tbl_belowcap_skip { 0pt }
  \skip_set:Nn \l_tbl_abovetbl_skip { 6pt }
  \skip_set:Nn \l_tbl_belowtbl_skip { 6pt }
  
}

\ExplSyntaxOff


\begin{document}


\section{Tables and Numerical Results} 
\label{app:tables-results}
\begin{table}[htbp]
    \caption{caption}
    \label{tab}
\end{table}
text


\end{document}

Source https://stackoverflow.com/questions/71675466

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install cas

You can download it from GitHub or Maven.
You can use cas like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the cas component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.

Support

If you have already identified an enhancement or a bug, it is STRONGLY recommended that you submit a pull request to address the case. There is no need for special ceremony to create separate issues. The pull request IS the issue and it will be tracked and tagged as such.
