Cybersecurity is security as it is applied to information technology. This includes all technology that stores, manipulates, or moves data, such as computers, data networks, and all devices connected to or included in networks, such as routers and switches. All information technology devices and facilities need to be secured against intrusion, unauthorized use, and vandalism. Additionally, the users of information technology should be protected from theft of assets, extortion, identity theft, loss of privacy and confidentiality of personal information, malicious mischief, damage to equipment, business process compromise, and the general activity of cybercriminals.
Popular New Releases in Cybersecurity
- Amass: v3.19.1
- juice-shop: v13.2.2
- pyWhat: 5.1.0 - New & Better regex ✨
- juice-shop: v12.8.1
- grr: GRR release 3.4.5.1
Popular Libraries in Cybersecurity
- by OWASP (python), 19686, NOASSERTION: The OWASP Cheat Sheet Series was created to provide a concise collection of high value information on specific application security topics.
- by OWASP (go), 6903, Apache-2.0: In-depth Attack Surface Mapping and Asset Discovery
- by juice-shop (typescript), 6608, MIT: OWASP Juice Shop: Probably the most modern and sophisticated insecure web application
- by mytechnotalent (c), 5434, Apache-2.0: A FREE comprehensive reverse engineering tutorial covering x86, x64, 32-bit ARM & 64-bit ARM architectures.
- by bee-san (python), 5040, MIT: 🐸 Identify anything. pyWhat easily lets you identify emails, IP addresses, and more. Feed it a .pcap file or some text and it'll tell you what it is! 🧙♀️
- by bkimminich (typescript), 4913, MIT: OWASP Juice Shop: Probably the most modern and sophisticated insecure web application
- by google (python), 4082, Apache-2.0: GRR Rapid Response: remote live forensics for incident response
- by MISP (php), 3701, AGPL-3.0: MISP (core software) - Open Source Threat Intelligence and Sharing Platform
- by OWASP (html), 2809, NOASSERTION: Official OWASP Top 10 Document Repository
Trending New libraries in Cybersecurity
- by mytechnotalent (c), 5434, Apache-2.0: A FREE comprehensive reverse engineering tutorial covering x86, x64, 32-bit ARM & 64-bit ARM architectures.
- by bee-san (python), 5040, MIT: 🐸 Identify anything. pyWhat easily lets you identify emails, IP addresses, and more. Feed it a .pcap file or some text and it'll tell you what it is! 🧙♀️
- by optiv (go), 1785, MIT: ScareCrow - Payload creation framework designed around EDR bypass.
- by coreruleset (python), 960, Apache-2.0: OWASP ModSecurity Core Rule Set (Official Repository)
- by fireeye (powershell), 855, NOASSERTION: Threat Pursuit Virtual Machine (VM): A fully customizable, open-sourced Windows-based distribution focused on threat intelligence analysis and hunting, designed for intel and malware analysts as well as threat hunters to get up and running quickly.
- by spieglt (c), 807, GPL-3.0: Log what files are accessed by any Linux process
- by zodiacon (c++), 737, MIT: Registry Explorer - enhanced Registry editor/viewer
- by Apr4h (csharp), 578, MIT: Scan files or process memory for CobaltStrike beacons and parse their configuration
- by thalesgroup-cert (python), 542, AGPL-3.0: Watcher - Open Source Cybersecurity Threat Hunting Platform. Developed with Django & React JS.
Top Authors in Cybersecurity
1. 97 Libraries, 33309
2. 9 Libraries, 445
3. 8 Libraries, 4296
4. 6 Libraries, 3677
5. 6 Libraries, 32
6. 5 Libraries, 6201
7. 4 Libraries, 51
8. 4 Libraries, 34
9. 4 Libraries, 119
10. 4 Libraries, 9
Trending Kits in Cybersecurity
Python Digital Forensics Libraries are collections of Python modules, functions, and scripts that give forensic investigators tools for analyzing digital evidence. They cover many aspects of digital forensics, including memory forensics, malware analysis, and file system analysis.
These libraries offer tools for analyzing file systems and disk images, allowing investigators to examine the directories, files, and other data stored on a device. They provide memory analysis tools that work on a memory dump or a live system, letting investigators extract information about network connections, running processes, and other system state. They include binary analysis tools for disassembling and analyzing malware and other malicious code, and network analysis tools for capturing and examining packets for evidence of malicious activity or data exfiltration. Finally, they offer tools for analyzing and decrypting encrypted communications and data, and for recovering deleted files and other data.
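The file-system side of that workflow can be sketched without any of these libraries at all. The snippet below is a minimal, standard-library-only illustration of the kind of task they automate: walking an evidence directory and recording a SHA-256 baseline for every file, so the evidence set can later be verified as unmodified. The directory and file names are hypothetical.

```python
import hashlib
import os
import tempfile

def hash_evidence(root):
    """Walk a directory tree and record a SHA-256 digest for every file,
    giving the investigator a baseline to verify evidence integrity against."""
    digests = {}
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large evidence files never load fully into memory.
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests

# Demo on a throwaway directory standing in for a mounted disk image.
evidence_dir = tempfile.mkdtemp()
with open(os.path.join(evidence_dir, "note.txt"), "wb") as f:
    f.write(b"suspicious note")
baseline = hash_evidence(evidence_dir)
```

Real libraries add far more (carving deleted files, parsing file-system metadata), but the hash-everything baseline is a common first step in any examination.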
Here are the 7 best Python Digital Forensics Libraries handpicked for developers:
beagle:
- Is an open source library that offers incident response and digital forensics tools.
- Is designed to help investigators automate common forensic tasks and analyze large data sets.
- Offers tools for analyzing disk images and file systems.
- Allows investigators to examine a system's directories, files, and other data.
Digital-Forensics-Guide:
- Is a Python package that offers tools for incident response and digital forensics.
- Includes memory forensics, malware analysis, file system analysis, and network analysis.
- Includes notebooks and scripts demonstrating how to analyze disk images and file systems.
- Offers various techniques and tools.
- Offers tools for analyzing digital evidence and identifying potential indicators of compromise.
ThePhish:
- Is an automated phishing email analysis tool based on MISP, TheHive, and Cortex.
- Automates the entire analysis process starting from the extraction of the observables.
- Proceeds from the header and body of an email to the elaboration of a final verdict in most cases.
- Allows the analyst to intervene in the analysis process and get further details.
dfirtrack:
- Is a web application designed for Digital Forensics and Incident Response teams.
- Helps teams manage and track the progress of their investigations.
- Offers a centralized platform for managing different investigations.
- Supports investigation tasks such as case creation, updating, and closing.
- Enables you to track and manage all digital evidence related to a particular case.
- Track evidence like associated metadata and storage locations.
Cortex-Analyzers:
- Offers a collection of analyzers for use with Cortex and TheHive platforms.
- Is a collaborative incident response platform for tracking and managing security incidents.
- Helps analyze file types, identify potential threats, and extract metadata.
- Helps analyze and identify malicious activity, detect data exfiltration, and analyze network traffic.
Forensic-Tools:
- Used for parsing Firefox profile databases.
- Can help extract cookies, Google searches, and history.
- Can analyze the Facebook app and Messenger (still new and currently being tested).
- Can extract messages with links, contacts, time, and attachments.
- Helps with profile pictures and links.
- Can extract account details, call logs, messages, and contacts with their full details.
kobackupdec:
- Is a Python library for decrypting encrypted backups.
- Handles backups created by the KNOX security feature on Samsung devices.
- Allows forensic investigators to extract data from encrypted backups.
- Enables them to perform digital forensics analysis on the extracted data.
- Uses a brute-force approach to decrypt the encrypted backup files.
The Career Path of a Cybersecurity Analyst
Cybersecurity is at the forefront of the digital era, and its significance is growing as our world becomes more dependent on technology. Amongst the professionals responsible for protecting our digital worlds, cybersecurity analysts are the vigilant protectors. They play an essential role in the detection, mitigation, and prevention of cyber risks, making them essential resources for organizations in a variety of industries.
If you’re looking to get into cybersecurity or just want to learn more about the field this article will help you understand who a cybersecurity analyst is and why it’s so important. We will look at the educational requirements, skills you need to have, and the career paths you can take if you want to join the ever-growing field of cybersecurity.
The Role of a Cybersecurity Analyst
What does a Cybersecurity analyst do?
A cybersecurity analyst is responsible for keeping your computer system, network, and data safe from cyber threats and weaknesses. They keep an eye out for security breaches, evaluate potential risks, create security plans, and put measures in place to keep important information secure, honest and accessible.
The key responsibilities of a cybersecurity analyst include:
Threat Detection and Analysis:
One of a cybersecurity analyst’s primary responsibilities is to keep an eye on network traffic, system records, and security notifications. This constant vigilance allows them to spot unusual activity and potential security breaches in real-time. With the help of sophisticated tools and techniques, a cybersecurity analyst analyzes these threats to identify their source, method, and impact.
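As a toy illustration of that kind of monitoring (not any specific tool's method), the sketch below counts events per source address and flags sources whose volume crosses a threshold. Real analysts rely on far richer baselining and correlation, but the shape of the task is similar. The event data and addresses here are made up.

```python
from collections import Counter

def flag_noisy_sources(events, threshold):
    """Count events per source address and return the sources whose volume
    exceeds the threshold -- a crude stand-in for the baselining and
    alerting that real monitoring tools perform."""
    counts = Counter(src for src, _kind in events)
    return sorted(src for src, n in counts.items() if n > threshold)

# Simulated event stream: one host hammering logins, one behaving normally.
events = [("203.0.113.7", "login_fail")] * 150 + [("198.51.100.3", "login_ok")] * 3
suspects = flag_noisy_sources(events, threshold=100)
```

In practice the threshold would come from a learned baseline per host rather than a fixed constant, but a simple count-and-compare already catches the loudest brute-force attempts.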
Incident Response:
Cybersecurity analysts play a critical role in the response to security incidents, such as data breaches and malware attacks. They investigate the root causes of the incident, determine the extent of the harm, and formulate a strategic response strategy to reduce the immediate impact of the incident and prevent similar occurrences in the future.
Vulnerability Assessment:
Another important part of their work is proactive vulnerability assessment. Cybersecurity analysts carry out regular vulnerability assessments of an organization's systems and applications, looking for weaknesses that attackers can exploit. They then work with teams to fix those vulnerabilities before they become the target of an attack.
Security Awareness Training:
One of the most important roles of a cybersecurity analyst is to educate employees and other stakeholders on cybersecurity risks and the best ways to protect against them. This proactive approach helps build a security culture within the company and reduces the chance of human mistakes that could lead to a breach.
Security Policy Development and Enforcement:
Cybersecurity analysts work on the development, implementation, and enforcement of security policies, processes, and best practices across an organization. They can also help ensure compliance with industry rules and regulations, such as GDPR or HIPAA.
Skills Required to be a Cybersecurity Analyst
As a cybersecurity analyst one is tasked with defending critical data, networks, and systems against cyber attacks. To excel in this vital role, individuals must cultivate a diverse skill set that spans technical expertise, analytical acumen, and a profound understanding of the evolving cybersecurity landscape. Some of the crucial skills include:
Technical Proficiency: understanding various operating systems, network protocols, and security technologies is fundamental. Proficiency in areas such as firewall management, intrusion detection, and encryption is crucial.
Threat Intelligence: staying informed about the latest threats and trends in the cybersecurity environment is important for defense. Analysts need to keep up to date with emerging threats and hacker techniques.
Analytical Skills: the ability to analyze large volumes of data, identify patterns, and make informed decisions is important for threat detection and incident response. Analysts use data analysis tools and techniques to uncover hidden threats.
Programming and Scripting: knowledge of programming languages like Python and scripting skills are valuable for automating routine tasks, conducting security assessments, and customizing security solutions.
Risk Assessment: evaluating risks and prioritizing security measures based on potential impact is essential. Cybersecurity analysts must understand the organization's business objectives and align security efforts accordingly.
Communication Skills: effective communication is critical for reporting security incidents, collaborating with other departments, and conveying complex technical concepts to non-technical stakeholders.
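As a small example of the programming-and-scripting skill, here is a hedged sketch of the kind of routine task an analyst might automate in Python: tallying failed-login attempts per source IP from raw log lines. The log format is modeled loosely on OpenSSH's "Failed password" messages and is an assumption, not a specification; real deployments vary.

```python
import re
from collections import Counter

# Pattern modeled on OpenSSH "Failed password" log lines (an assumption:
# real log formats vary and should be checked against actual samples).
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Tally failed-login attempts per source IP from raw log lines."""
    tally = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            tally[m.group(2)] += 1  # group(2) is the source address
    return tally

# Hypothetical sample lines for demonstration.
sample_log = [
    "Mar 19 10:01:02 host sshd[913]: Failed password for root from 203.0.113.7 port 50022 ssh2",
    "Mar 19 10:01:05 host sshd[913]: Failed password for invalid user admin from 203.0.113.7 port 50023 ssh2",
    "Mar 19 10:02:00 host sshd[920]: Accepted password for alice from 198.51.100.3 port 41000 ssh2",
]
attempts = failed_logins(sample_log)
```

A script like this, scheduled against live logs, is the sort of small automation that frees an analyst to focus on investigation rather than manual grepping.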
Job Opportunities
Cybersecurity analysts are in high demand as companies and organizations realize the importance of keeping digital information safe and secure.
Industries that Require Cybersecurity Analysts
Cybersecurity plays a central role in almost every industry and sector, which means cybersecurity analyst jobs are plentiful and varied. Here are a few key industries that are always looking for cybersecurity talent:
- Finance and Banking: Financial institutions manage vast amounts of sensitive data, making them prime targets for cyberattacks. They require skilled analysts to safeguard customer financial information and maintain the integrity of their systems.
- Healthcare: Healthcare organizations maintain large volumes of digital patient records that must be protected to preserve patient privacy and ensure the security of medical data.
- Government and Defense: Government agencies and defense organizations require cybersecurity experts to protect national security interests, government data, and critical infrastructure.
- Retail and e-commerce: Online retailers handle vast amounts of customer data and payment information, making them targets for cyberattacks. They, thus, need cybersecurity analysts to safeguard customer information and maintain trust.
- Technology Companies: Tech firms, including software developers, hardware manufacturers, and cloud service providers, need cybersecurity professionals to protect their products and services from security breaches.
Job Titles for Cybersecurity Analysts
Cybersecurity analysts may go by various job titles, depending on the organization and specific responsibilities. Here are some common job titles associated with this role:
- Security Analyst: a general title for cybersecurity professionals who monitor security systems, investigate incidents, and implement security measures.
- Threat Analyst: specializes in identifying and assessing cybersecurity threats, vulnerabilities, and risks.
- Incident Responder: an expert in handling security incidents, mitigating damage, and implementing measures to prevent future occurrences.
- Network Security Analyst: focused specifically on securing an organization's network infrastructure.
- Compliance Analyst: ensures that an organization adheres to cybersecurity regulations, standards, and best practices.
Career Growth and Advancement
Cybersecurity analysts' careers typically start with entry-level roles and progress to more specialized, senior positions.
After learning how to detect threats, respond to incidents, and assess vulnerabilities, analysts can go on to work as a ‘security architect’, designing and implementing complex security plans. Another way to get into a senior role is as a ‘Penetration tester or ethical hacker’, tasked with proactively finding vulnerabilities by simulating attacks. If you want to move up to a leadership role, you can aim for a Security Manager or Director role, where you oversee security teams and strategies.
The high point of a cybersecurity career usually comes when you become a CISO (Chief Information Security Officer), taking charge of an organization's entire cybersecurity program and reporting directly to top executives. This career path provides not only professional growth but also greater responsibility and higher pay, as cybersecurity remains a top priority for organizations around the world.
Global Demand for Cybersecurity Professionals
Cybersecurity jobs are in high demand all over the world because of the ever-growing digital landscape and the constant threat of cyber attacks.
As digitalization grows and cyberattacks become more sophisticated, a large gap has opened between the supply of cybersecurity professionals and the demand for them. As long as that trend continues, there will be plenty of job opportunities and strong pay in cybersecurity.
Compensation
How much money does a cybersecurity analyst make?
Salaries depend on many factors, such as experience, location, and employer, but the average salary for a cybersecurity analyst in the US is between $60,000 and $120,000 a year. Senior analysts with deep experience and knowledge can earn even more. Many companies also offer health insurance, retirement benefits, professional development programs, and bonuses to cybersecurity professionals.
In conclusion, cybersecurity analysts play a critical role in defending organizations and individuals against cyber threats such as data breaches and cyberattacks. They possess a wide range of competencies and are committed to continuous learning and to upholding the highest security standards.
As the digital world continues to evolve, cybersecurity analysts have the opportunity to pursue a career that offers both financial security and intellectual stimulation.
Sample Articles:
Windows Server Backup for Hyper-V Environments: Methods to Enhance Data Protection
Top 5 Common Cybersecurity Attacks MSPs Should Know in 2024
Proven Methods for Efficient Virtual Machine Backups in Hyper-V
The Best Enterprise Backup Solutions for 2023
Virtual Backup Mastery: Techniques for the Modern IT Environment
The Ultimate Guide to NAS Backup: 5 Strategies with Their Upsides and Downsides
Mastering the Transition: Navigating from Hyper-V to VMware Virtualization
How to Enable Virtualization in Windows 11? Easy Guide
6 Ways To Protect Critical Digital Assets
A Comprehensive Guide on How to Backup Data from NAS
How To Pick Sales Intelligence Software In 2023
11 Best Open Source Hypervisor Technologies
2023's Cyber Security Forecast: The Latest Threats and Trends to Watch
11 Must-Have Tools For Small Businesses To Streamline Operations
Trending Discussions on Cybersecurity
Golang reads html tags (<>) from JSON string data as &lt and &gt which causes rendering issues in the browser
Python / BeautifulSoup return ids with indeed jobs
Specific argument causes argparse to parse arguments incorrectly
How do I adjust my tibble to get a grouped bar chart in ggplot2?
how to make a model fit the dataset in Keras?
How to change my css to make hyper link visible [ with minimum sample code ]?
Bootstrap overflow width when writting an article with many paragraphs
Faster way than nested for loops for custom conditions on multiple columns in two DataFrames
Find a hash function to malfunction insertion sort
component wont render when is useEffect() is ran once
QUESTION
Golang reads html tags (<>) from JSON string data as &lt and &gt which causes rendering issues in the browser
Asked 2022-Mar-19 at 18:45

I have a basic web server that renders blog posts from a database of JSON posts wherein the main paragraphs are built from a JSON string array. I was trying to find a way to easily encode new lines or line breaks and found a lot of difficulty with how the encoding for these values changes from JSON to Go and finally to my HTML webpage. When I tried to encode my JSON with newlines, I found I had to encode them as \\n rather than just \n in order for them to actually appear on my page. One problem, however, was that they simply appeared as text and not as line breaks.
I then tried to find a way to replace the \n portions of the joined string array with <br> tags, but I could not find any way to do this in Go, so I moved to trying to do it in JavaScript. This did not work either, despite deferring the loading of my JavaScript in the script tag in my HTML. Here is that JavaScript:
var title = window.document.getElementById("title");
var timestamp = window.document.getElementById("timestamp");
var sitemap = window.document.getElementById("sitemap");
var main = window.document.getElementById("main");
var contact_form = window.document.getElementById("contact-form");
var content_info = window.document.getElementById("content-info");

var str = main.innerHTML;

function replaceNewlines() {
    // Replace the \n with <br>
    str = str.replace(/(?:\r\n|\r|\n)/g, "<br>");

    // Update the value of paragraph
    main.innerHTML = str;
}
Here is my HTML:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Dynamic JSON Events</title>
    <link rel="stylesheet" href="/blogtemplate.css">
</head>
<body>
    <section id="title">
        <h1 id="text-title">{{.Title}}</h1>
        <time id="timestamp">
            {{.Timestamp}}
        </time>
    </section>
    <nav role="navigation" id="site-nav">
        <ul id="sitemap">
        </ul>
    </nav>
    <main role="main" id="main">
        {{.ParsedMain}}
    </main>
    <footer role="contentinfo" id="footer">
        <form id="contact-form" role="form">
            <address>
                Contact me by <a id="my-email" href="mailto:antonhibl11@gmail.com" class="my-email">e-mail</a>
            </address>
        </form>
    </footer>
<script defer src="/blogtemplate.js">
</script>
</body>
</html>
I then finally turned to hardcoding <br> tags into my JSON data, only to discover that they render as literal escaped text (&lt;br&gt;) when they finally reach the browser. I am getting pretty frustrated with this process of encoding constantly causing me issues in creating newlines and line breaks. How can I easily include newlines where I want in my JSON string data?
Here is my Go script if it helps:
package main

import (
    "encoding/json"
    "html/template"
    "log"
    "net/http"
    "os"
    "regexp"
    "strings"
)

type BlogPost struct {
    Title      string   `json:"title"`
    Timestamp  string   `json:"timestamp"`
    Main       []string `json:"main"`
    ParsedMain string
}

// this did not seem to work when I tried to implement it below
var re = regexp.MustCompile(`\r\n|[\r\n\v\f\x{0085}\x{2028}\x{2029}]`)

func replaceRegexp(s string) string {
    return re.ReplaceAllString(s, "<br>\n")
}

var blogTemplate = template.Must(template.ParseFiles("./assets/docs/blogtemplate.html"))

func blogHandler(w http.ResponseWriter, r *http.Request) {
    blogstr := r.URL.Path[len("/blog/"):] + ".json"

    f, err := os.Open("db/" + blogstr)
    if err != nil {
        http.Error(w, err.Error(), http.StatusNotFound)
        return
    }
    defer f.Close()

    var post BlogPost
    if err := json.NewDecoder(f).Decode(&post); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    post.ParsedMain = strings.Join(post.Main, "")

    // post.ParsedMain = replaceRegexp(post.ParsedMain)

    if err := blogTemplate.Execute(w, post); err != nil {
        log.Println(err)
    }
}

func teapotHandler(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusTeapot)
    w.Write([]byte("<html><h1><a href='https://datatracker.ietf.org/doc/html/rfc2324/'>HTCPTP</h1><img src='https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Ftaooftea.com%2Fwp-content%2Fuploads%2F2015%2F12%2Fyixing-dark-brown-small.jpg&f=1&nofb=1' alt='Im a teapot'><html>"))
}

func faviconHandler(w http.ResponseWriter, r *http.Request) {
    http.ServeFile(w, r, "./assets/art/favicon.ico")
}

func main() {
    http.Handle("/", http.FileServer(http.Dir("/assets/docs")))
    http.HandleFunc("/blog/", blogHandler)
    http.HandleFunc("/favicon.ico", faviconHandler)
    http.HandleFunc("/teapot", teapotHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Here is an example of my JSON data:
{
    "title" : "Finished My First Blog App",
    "timestamp": "Friday, March 18th, 11:39 AM",
    "main": [
        "It took me awhile to tidy everything up but I finally finished creating my first ",
        "blog app using Go along with JSON for my database. I plan on using this to document ",
        "my own thoughts and experiences as a programmer and cybersecurity researcher; things ",
        "like tutorials, thought-pieces, and journals on my own projects progress will be ",
        "posted here. I look forward to getting more used to writing and sharing my own story, ",
        "I think it will help me learn from doing and also hearing feedback from others.\\n\\n",
        "I utilized a handler function to dynamically read from my JSON database and template ",
        "data into my HTML template using the go html/template package as well as the encoding/json ",
        "to handling reading those objects. Next I had to make sure my CSS and JavaScript assets ",
        "would be served alongside this finished template in order for my styling to be output into ",
        "the browser. For this I used a FileServer function which allowed for me to serve linked ",
        "resources in my HTML boilerplate and have the server still locate blog resources dynamically. ",
        "Going forward I am looking to add better styling, more JavaScript elements to the page, and ",
        "more functionality to how my JSON data is encoded and parsed in order to create more complex ",
        "looking pages and blog posts."
    ]
}
I am just trying to find a way to easily include spaces between paragraphs in the long array of strings in my JSON. I have failed in Go, my JavaScript never seems to affect my webpage (this is not the only problem I have had with it; it does not seem to affect any page elements for some reason), and I cannot hardcode <br> tags directly into my JSON because the browser renders them as literal text. Nothing I have tried has actually let me encode line breaks. What can I do here?
ANSWER
Answered 2022-Mar-19 at 06:43

You could try to loop over your array inside the template and generate a <p> tag for every element of the array. This way there is no need to edit your Main array in Go.
Template:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Dynamic JSON Events</title>
    <link rel="stylesheet" href="/blogtemplate.css">
</head>
<body>
    <section id="title">
        <h1 id="text-title">{{.Title}}</h1>
        <time id="timestamp">
            {{.Timestamp}}
        </time>
    </section>
    <nav role="navigation" id="site-nav">
        <ul id="sitemap">
        </ul>
    </nav>
    <main role="main" id="main">
        {{range $element := .Main}} <p>{{$element}}</p> {{end}}
    </main>
    <footer role="contentinfo" id="footer">
        <form id="contact-form" role="form">
            <address>
                Contact me by <a id="my-email" href="mailto:antonhibl11@gmail.com" class="my-email">e-mail</a>
            </address>
        </form>
    </footer>
<script defer src="/blogtemplate.js">
</script>
</body>
</html>
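As a side note on the asker's escaping problem: if you do want literal <br> tags produced in Go to survive into the page, note that html/template escapes plain strings by default; declaring a field as template.HTML marks it as pre-trusted markup. A minimal, self-contained sketch (the struct loosely mirrors the question's BlogPost; the render helper is made up for illustration, and template.HTML should only be used on content you control, since it bypasses the package's XSS protection):

```go
package main

import (
	"fmt"
	"html/template"
	"strings"
)

// Hypothetical variant of the question's BlogPost: typing ParsedMain as
// template.HTML tells html/template NOT to escape tags such as <br>.
type BlogPost struct {
	Title      string
	Main       []string
	ParsedMain template.HTML
}

// render joins the Main fragments, turns newlines into <br> tags, and
// executes a tiny inline template, returning the produced HTML.
func render(post BlogPost) string {
	joined := strings.Join(post.Main, "")
	post.ParsedMain = template.HTML(strings.ReplaceAll(joined, "\n", "<br>"))

	t := template.Must(template.New("t").Parse(`<main>{{.ParsedMain}}</main>`))
	var b strings.Builder
	t.Execute(&b, post)
	return b.String()
}

func main() {
	post := BlogPost{
		Title: "Demo",
		Main:  []string{"first paragraph\n\n", "second paragraph"},
	}
	fmt.Println(render(post))
}
```

With a plain string field the same template would print &lt;br&gt; instead of a line break, which matches the behaviour described in the question.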
QUESTION
Python / BeautifulSoup return ids with indeed jobs
Asked 2022-Feb-19 at 20:51
I have a basic Indeed web scraper set up using BeautifulSoup, with which I am able to return the job title and company of each job from the first page of the Indeed job-search URL I am using:
import requests
from bs4 import BeautifulSoup

def extract():
    headers = headers
    url = f'https://www.indeed.com/jobs?q=Network%20Architect&start=&vjk=e8bcf3fbe7498a5f'
    r = requests.get(url,headers)
    #return r.status_code
    soup = BeautifulSoup(r.content, 'html.parser')
    return soup

def transform(soup):
    for job in soup.select('.result'):
        title = job.select_one('.jobTitle').get_text(' ')
        company = job.find(class_='companyName').text
        print(f'title: {title}')
        print(f'company: {company}')


c = extract()
transform(c)
Output
title: new Network Architect
company: MetroSys
title: new Network Architect
company: Federal Working Group
title: new REMOTE Network Architect - CCIE
company: CyberCoders
title: new Network Architect SME
company: Emergere Technologies
title: Cybersecurity Apprentice
company: IBM
title: Network Engineer (NEW YORK) ONSITE ONLY NEED TO APPLY
company: QnA Tech
title: new Network Architect
company: EdgeCo Holdings
title: new Network Architect
company: JKL Technologies, Inc.
title: Network Architect
company: OTELCO
title: new Network Architect
company: Illinois Municipal Retirement Fund (IMRF)
title: new Network Architect, Google Enterprise Network
company: Google
title: new Network Infrastructure Lead Or Architect- Menlo Park CA -Ful...
company: Xforia Technologies
title: Network Architect
company: Fairfax County Public Schools
title: new Network Engineer
company: Labatt Food Service
title: new Network Architect (5056-3)
company: JND
Now, Indeed appears to assign a unique ID to each job. I am trying to access this ID along with each job so that I can use it later in an SQL database and avoid adding duplicate jobs. I am able to access the job IDs with the following code:
for tag in soup.find_all('a', class_='result'):
    print(tag.get('id'))
Output:
job_a678f3bfc20cb753
job_eef3e4c10d979c1e
job_faedfdbadab2f19b
job_190a6b55b99c78f0
job_32d20498e8fbf692
job_aeaabb9af50f36d6
job_92432325a24212d0
job_819ce9d7ec6e5890
job_d979bf7daac01528
job_0879369d166a9b94
job_2d377bc2e5085ad7
job_bb8e5d0f651c072f
job_dcff58df466f1ecb
job_f70d55871eb1df3f
sj_54a09e5e34e08948
When I try to implement this in my working code I can access the IDs; however, they all get returned together instead of one at a time with the corresponding job posting, so instead of 15 total results I get 15x15. I have tried this way:
def transform(soup):
    for job in soup.select('.result'):
        title = job.select_one('.jobTitle').get_text(' ')
        company = job.find(class_='companyName').text
        tag = soup.find_all('a', class_='result')
        for x in tag:
            print(x.get('id'))
        print(f'title: {title}')
        print(f'company: {company}')
And this way:
def transform(soup):
    for job in soup.select('.result'):
        title = job.select_one('.jobTitle').get_text(' ')
        company = job.find(class_='companyName').text
        tag = soup.find_all('a', class_='result')
        for x in tag:
            print(x.get('id'))
        print(f'title: {title}')
        print(f'company: {company}')
The second way is the closest to my desired result; however, instead of getting 1 title, 1 company, and 1 ID per posting (15 total job postings), I get all of the IDs returned with each job posting, so 15x15.
The desired result is just to get it returned as:
title
company
ID
title
company
ID
ANSWER
Answered 2022-Feb-19 at 20:51
You still have the job element and already extract information from it, so why not simply extract the id from it as well with job.get('id')? This should work for you:
def transform(soup):
    for job in soup.select('.result'):
        title = job.select_one('.jobTitle').get_text(' ')
        company = job.find(class_='companyName').text
        id = job.get('id')
        print(f'title: {title}')
        print(f'company: {company}')
        print(f'id: {id}')
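With the id available per job, the deduplication goal from the question can be handled on the database side by making the scraped ID the primary key. A minimal sketch with the standard-library sqlite3 module (the jobs table and its column names are made up for illustration), using INSERT OR IGNORE so re-running the scraper cannot insert the same posting twice:

```python
import sqlite3

# Hypothetical schema: the Indeed job id is the primary key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, title TEXT, company TEXT)")

def save_job(job_id, title, company):
    # INSERT OR IGNORE silently skips rows whose primary key already exists
    conn.execute("INSERT OR IGNORE INTO jobs VALUES (?, ?, ?)",
                 (job_id, title, company))

save_job("job_a678f3bfc20cb753", "Network Architect", "MetroSys")
save_job("job_a678f3bfc20cb753", "Network Architect", "MetroSys")  # duplicate, ignored

count = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
print(count)  # -> 1
```

Inside the transform loop above, this would be a save_job(id, title, company) call in place of (or alongside) the print statements.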
QUESTION
Specific argument causes argparse to parse arguments incorrectly
Asked 2021-Dec-27 at 21:25
I am using Python's argparse in a script that has so far worked perfectly. However, passing a specific filepath as an argument causes the parser to fail.
Here is my argparse setup:
import sys
import argparse

parser = argparse.ArgumentParser(prog="writeup_converter.py", description="Takes a folder of Obsidian markdown files and copies them across to a new location, automatically copying any attachments. Options available include converting to a new set of Markdown files, removing and adding prefixes to attachments, and converting for use on a website")

#positional arguments
parser.add_argument("source_folder", help="The folder of markdown files to copy from.")
parser.add_argument("source_attachments", help="The attachments folder in your Obsidian Vault that holds attachments in the notes.")
parser.add_argument("target_folder", help="The place to drop your converted markdown files")
parser.add_argument("target_attachments", help="The place to drop your converted attachments. Must be set as your attachments folder in Obsidian (or just drop them in the root of your vault if you hate yourself)")

#optional flags
parser.add_argument("-r", "--remove_prefix", help="Prefix to remove from all your attachment file paths.")
parser.add_argument("-v", "--verbose", action="store_true", help="Verbose mode. Gives details of which files are being copied. Disabled by default in case of large directories")
parser.add_argument("-w", "--website", help="Use website formatting when files are copied. Files combined into one markdown file with HTML elements, specify the name of this file after the flag")
parser.add_argument("-l", "--asset_rel_path", help="Relative path for site assets e.g. /assets/images/blogs/..., include this or full system path will be added to links")

print(sys.argv)
exit()

#parse arguments
args = parser.parse_args()
I've added the print and exit calls for debugging purposes. Previously, when I ran the program with this configuration it worked well; however, this set of arguments produces a strange error:
PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py -v -r Cybersecurity "..\Personal-Vault\Cybersecurity\SESH\2021-22 Sessions\Shells Session Writeups\" "..\Personal-Vault\Attachments\" "..\Cybersecurity-Notes\Writeups\SESH\DVWA\" "..\Cybersecurity-Notes\Attachments\"
usage: writeup_converter.py [-h] [-r REMOVE_PREFIX] [-v] [-w WEBSITE] [-l ASSET_REL_PATH] source_folder source_attachments target_folder target_attachments
writeup_converter.py: error: the following arguments are required: source_attachments, target_folder, target_attachments
It seems not to recognise the positional arguments that are definitely present. I added those debugging statements to see what the state of the arguments was according to Python:
['.\\writeup_converter.py', '-v', '-r', 'Cybersecurity', '..\\Personal-Vault\\Cybersecurity\\SESH\\2021-22 Sessions\\Shells Session Writeups" ..\\Personal-Vault\\Attachments\\ ..\\Cybersecurity-Notes\\Writeups\\SESH\\DVWA\\ ..\\Cybersecurity-Notes\\Attachments\\']
As you can see, the four positional arguments have been combined into one. Experimenting further I found that the first argument specifically causes this issue:
24PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py a b c d
25['.\\writeup_converter.py', 'a', 'b', 'c', 'd']
26PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py "a b" b c d
27['.\\writeup_converter.py', 'a b', 'b', 'c', 'd']
28PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py "a\ b" b c d
29['.\\writeup_converter.py', 'a\\ b', 'b', 'c', 'd']
30PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py "a\ b" "b" c d
31['.\\writeup_converter.py', 'a\\ b', 'b', 'c', 'd']
32PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py "..\Personal-Vault\Cybersecurity\SESH\2021-22 Sessions\Shells Session Writeups\" "b" c d
33['.\\writeup_converter.py', '..\\Personal-Vault\\Cybersecurity\\SESH\\2021-22 Sessions\\Shells Session Writeups" b c d']
34
As you can see, the arguments are parsed correctly until the string "..\Personal-Vault\Cybersecurity\SESH\2021-22 Sessions\Shells Session Writeups\"
is used. I can't figure out a reason for this, so any ideas would be appreciated. This behaviour occurs in both Python and CMD.
ANSWER
Answered 2021-Dec-27 at 21:25
About ten seconds after posting this I realised the error, thanks to Stack Overflow's syntax highlighting: the trailing backslash in the path was escaping the closing quotation mark. Escaping the backslash itself (doubling it) makes argparse behave correctly:
PS D:\OneDrive\Documents\writeup-converter> python .\writeup_converter.py -v -r Cybersecurity "..\Personal-Vault\Cybersecurity\SESH\2021-22 Sessions\Shells Session Writeups\\" ..\Personal-Vault\Attachments\ ..\Cybersecurity-Notes\Writeups\SESH\DVWA\ ..\Cybersecurity-Notes\Attachments\
['.\\writeup_converter.py', '-v', '-r', 'Cybersecurity', '..\\Personal-Vault\\Cybersecurity\\SESH\\2021-22 Sessions\\Shells Session Writeups\\', '..\\Personal-Vault\\Attachments\\', '..\\Cybersecurity-Notes\\Writeups\\SESH\\DVWA\\', '..\\Cybersecurity-Notes\\Attachments\\']
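The rule at work here is the Microsoft C runtime's command-line parsing, which Python's sys.argv inherits on Windows: a run of backslashes immediately before a double quote is halved, and an odd-length run escapes the quote into a literal character. The following is a simplified toy parser sketching that rule (my own illustration, not the full MSVCRT algorithm):

```python
def msvcrt_split(cmdline):
    # Simplified sketch of how the Microsoft C runtime splits a command
    # line into argv (the behaviour Python's sys.argv sees on Windows).
    # Key rule: a backslash run immediately before a double quote is
    # halved; an odd run leaves a literal, escaped quote behind.
    args, cur, in_quotes, i = [], "", False, 0
    n = len(cmdline)
    while i < n:
        c = cmdline[i]
        if c == "\\":
            j = i
            while j < n and cmdline[j] == "\\":
                j += 1
            backslashes = j - i
            if j < n and cmdline[j] == '"':
                cur += "\\" * (backslashes // 2)
                if backslashes % 2:
                    cur += '"'                 # odd run: quote becomes literal
                else:
                    in_quotes = not in_quotes  # even run: quote opens/closes
                i = j + 1
            else:
                cur += "\\" * backslashes      # backslashes elsewhere are literal
                i = j
        elif c == '"':
            in_quotes = not in_quotes
            i += 1
        elif c == " " and not in_quotes:
            if cur:
                args.append(cur)
                cur = ""
            i += 1
        else:
            cur += c
            i += 1
    if cur:
        args.append(cur)
    return args

# The trailing \" escapes the closing quote, so everything up to the next
# quote fuses into one argument:
print(msvcrt_split(r'"Shells Session Writeups\" "b" c d'))
# → ['Shells Session Writeups" b c d']
# Doubling the final backslash restores the expected four arguments:
print(msvcrt_split(r'"Shells Session Writeups\\" b c d'))
# → ['Shells Session Writeups\\', 'b', 'c', 'd']
```

This is why the fix above is simply `Writeups\\"` instead of `Writeups\"`.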
QUESTION
How do I adjust my tibble to get a grouped bar chart in ggplot2?
Asked 2021-Nov-22 at 04:25
I think the code itself I'm using for a grouped bar chart is roughly correct. However, my tibble doesn't have a way to call the three categories I need (Views, Interactions, and Comments). I have a conceptual issue in making this work.
This is what I'm trying to execute in ggplot2:
bp_vic <- ggplot(data, aes(x = Day, y = value, fill = category)) +
  geom_bar(position = 'dodge', stat = 'identity')
bp_vic
Value and fill may be off. However, I think the main issue is not having a proper category call.
My tibble has six columns; the last three, integer counts for Views, Interactions, and Comments, are what I'm trying to plot.
This is my script file and this is the CSV I'm generating my tibble from.
I have successfully executed this for individual columns only:
bp_v <- ggplot(data, aes(x = Day, y = Views)) + geom_col()
bp_v
dput(data)
structure(list(Day = c(-3L, -2L, -1L, 1L, 2L, 3L, 4L, 5L, 6L,
7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L,
20L, 21L, 22L, 23L, 24L, 25L), Category = c("SpaceForce", "CyberSecurity",
"Celebration", "Update", "Update", "SpaceNews", "Data", "USSFExplained",
"USSFExplained", "USSFExplained", "USSFExplained", "USSFExplained",
"USSFExplained", "USSFExplained", "Nostalgia", "Data", "Publishing",
"SpaceForce", "Military", "SpaceNews", "Publishing", "Office",
"Office", "Office", "Office", "Data", "Update", "Space"), Type = c("Share",
"Photo_1", "Photo_5", "Text", "Text", "Text", "Photo_1", "Text",
"Photo_1", "Photo_1", "Photo_1", "Text", "Text", "Text", "Photo_1",
"Photo_1", "Text", "Text", "Text", "Photo_3", "Text", "Text",
"Photo_3", "Text", "Text", "Photo_3", "Photo_1", "Photo_1"),
    Views = c(26L, 99L, 7106L, 517L, 655L, 828L, 2183L, 911L,
    467L, 247L, 299L, 245L, 674L, 668L, 721L, 1358L, 383L, 701L,
    281L, 1339L, 770L, 373L, 482L, 386L, 166L, 454L, 366L, 318L
    ), Interactions = c(0L, 0L, 125L, 8L, 10L, 9L, 16L, 17L,
    10L, 9L, 9L, 7L, 10L, 8L, 9L, 10L, 13L, 9L, 11L, 18L, 13L,
    6L, 4L, 9L, 4L, 11L, 6L, 10L), Comments = c(0L, 0L, 35L,
    4L, 12L, 11L, 7L, 10L, 9L, 1L, 2L, 4L, 8L, 5L, 2L, 11L, 10L,
    13L, 0L, 19L, 9L, 5L, 4L, 4L, 0L, 8L, 5L, 6L)), class = "data.frame", row.names = c(NA,
-28L))
ANSWER
Answered 2021-Nov-22 at 04:25
You want to use tidyverse to put the data into a usable (and tidy) format before trying to plot it.
library(tidyverse)

data <-
  data %>%
  tidyr::pivot_longer(
    cols = c(Views, Interactions, Comments),
    names_to = "Section",
    values_to = "values"
  )
New format
head(data)
# A tibble: 6 × 5
    Day Category      Type    Section      values
  <int> <chr>         <chr>   <chr>         <int>
1    -3 SpaceForce    Share   Views            26
2    -3 SpaceForce    Share   Interactions      0
3    -3 SpaceForce    Share   Comments          0
4    -2 CyberSecurity Photo_1 Views            99
5    -2 CyberSecurity Photo_1 Interactions      0
6    -2 CyberSecurity Photo_1 Comments          0
Then, you can plot the grouped bar chart.
ggplot(data, aes(fill = Section, y = values, x = Day)) +
  geom_bar(position = "dodge", stat = "identity")
Output (though most bars are difficult to see because of the one really high value).
You could just as easily plot by Category rather than by day, by using x = Category instead of x = Day.
If you would like to change the order of the categories, you can make Category a factor, which you can do without changing the dataframe.
# Create order for the categories. If you want to do it by the number of views, then you can create the list from your dataframe.
level_order <- data %>%
  dplyr::filter(Section == "Views") %>%
  dplyr::arrange(desc(values)) %>%
  pull(Category) %>%
  unique()

# Then, set category as a factor and include the ordered categories.
ggplot(data, aes(fill = Section, y = values, x = factor(Category, level = level_order))) +
  geom_bar(position = "dodge", stat = "identity")
QUESTION
how to make a model fit the dataset in Keras?
Asked 2021-Nov-14 at 20:30
The idea is to make a program that can detect whether an attack happened or not. I got stuck at fitting the model.
Libraries imported:
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
import pandas as pd
https://www.unsw.adfa.edu.au/unsw-canberra-cyber/cybersecurity/ADFA-NB15-Datasets/bot_iot.php
https://ieee-dataport.org/documents/bot-iot-dataset
As you can see, in the attack column I want the program to tell whether an attack happened or not.
This is the model:
model = Sequential()
model.add(Conv1D(128, 5, activation='relu'))
model.add(MaxPooling1D())
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(10,activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.add(Flatten())
and the model compile
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
The model-fitting part (here is my issue):
model.fit(train, test, epochs=50, batch_size=30)
Error:
ValueError: Data cardinality is ambiguous:
  x sizes: 2934817
  y sizes: 733705
Make sure all arrays contain the same number of samples.
From the error message it's clear the files don't have the same number of rows, so I tried using only the test file and split it into two parts: columns 0 to 16 as features, and column 16 as the label:
x = test.iloc[:,0:16]
y = test.iloc[:,16]
model.fit(x, y, epochs=50, batch_size=30)
Error:
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).
I have tried to cast everything to float, but it didn't work out; I still have the same problem.
ANSWER
Answered 2021-Nov-14 at 14:36
The first problem I'm finding is that when using .fit() you need to pass the x and y values, not the train and test sets, and that's why you are getting the error. Keras is trying to predict your full test dataset based on the train dataset, which of course makes no sense.
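The cardinality check that triggers the first error can be sketched without Keras at all: features and labels must be sliced out of the same rows, so their sample counts always match. (The three-row dataset below is made up purely for illustration, not the actual BoT-IoT schema.)

```python
# Hypothetical mini-dataset: each row is [feature, feature, label].
rows = [
    [0.1, 0.2, 0],
    [0.3, 0.4, 1],
    [0.5, 0.6, 0],
]

# Features and labels come from the SAME rows...
x = [r[:-1] for r in rows]
y = [r[-1] for r in rows]

# ...so their sample counts match, which is exactly what Keras verifies
# before raising "Data cardinality is ambiguous".
assert len(x) == len(y)
print(len(x), len(y))
# → 3 3
```

Passing `train` as x and `test` as y instead hands Keras two arrays sliced from different files, hence the 2934817 vs 733705 mismatch.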
The second error suggests you are passing the right variables to the model (the last column being the target, defined as y, and the predictors defined as x), but there seems to be an issue with how the data is formatted. Without access to the data it's hard to solve. Are all columns numerical? If so, as addressed here, this might do the trick:
import numpy as np

x = np.asarray(x).astype('float32')
If the data is not numerical across all entry points, then you might need to do some preprocessing to ensure it is fully numerical. Some alternatives worth looking into might be:
- One hot encoding which can be easily applied from sklearn.
- Pandas' get dummies which you can use and pass directly the non-numerical columns.
Once your dataset is all of numerical types, you should be able to use it to train the model without issues.
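As a sketch of the idea behind both options: one-hot encoding replaces each categorical value with binary indicator columns, which is what pandas' get_dummies and sklearn's OneHotEncoder do at scale. A minimal hand-rolled version (the protocol-name values here are invented for illustration):

```python
def one_hot(values):
    # Map each distinct category to a binary indicator column,
    # with columns ordered alphabetically.
    categories = sorted(set(values))
    return [[1 if v == cat else 0 for cat in categories] for v in values]

# Columns end up ordered alphabetically: [icmp, tcp, udp]
print(one_hot(["tcp", "udp", "tcp", "icmp"]))
# → [[0, 1, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]]
```

After encoding, every column is numeric, so the float32 conversion above succeeds and the model can train.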
QUESTION
How to change my css to make hyper link visible [ with minimum sample code ]?
Asked 2021-Oct-12 at 15:34
I have a site with CSS, but the hyperlinks are not visible [right side]. How do I change my HTML/CSS so that the hyperlinks are visible [like the left side in the following image]?
I've simplified my site to show the problem; here is the minimum sample code:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>GATE Cyber Technology : Award Winning Innovation For Identity And Access Management</title>
    <meta name="description" content="GATE Cyber Technology LLC. INTERCEPTION-RESISTANT AUTHENTICATION AND ENCRYPTION SYSTEM AND METHOD. Introducing a breakthrough digital security innovation : Graphic Access Tabular Entry [ GATE ], an interception-resistant authentication and encryption system and method. With the GATE system you are not afraid that you are watched when you enter passwords, and you are not afraid that the password will be intercepted, the GATE innovative method is designed to be peek-resistant and interception-resistant. The GATE system and method will offer you better digital security. Identity and Access Management (IAM)">
    <meta name="keywords" content="GATE Cyber Technology LLC. INTERCEPTION-resistant AUTHENTICATION AND ENCRYPTION SYSTEM AND METHOD, Graphic Access Tabular Entry [ GATE ], GATE security, GATE authentication, GATE login, GATE user authentication, GATE password, GATE passcode, peek-resistant, online security, digital security, passwords, password protection, strong password, strong cybersecurity, strong user authentication, prevent password loss, prevent user credential loss, passcode, cyber security, pin, login, logon, digital access, online access, access control, online protection, digital protection, online defence, digital defence, message encryption, message decryption, signal encryption, signal decryption, overcome weakness of traditional password, the GATE system, award winning, better than fingerprinting, better than iris scanning, safer than keyfob, better than password manager, safer password entry, Identity and Access Management (IAM), GATE defeats wiretapping, keylogging, peeking, phishing and dictionary attack, no restrictions of traditional password's lowercase, uppercase, numbers and special characters requirements, easy to use">
    <meta name="google-site-verification" content="cXY5hsdt7XCjR_k96nha7Hn5uW4fw_1u6mc2LWDyAQ0" />
    <link rel="shortcut icon" href="favicon.ico">

    <link media="all" type="text/css" rel="stylesheet" href="https://cdn.ahrefs.com/assets/css/bootstrap.min.css">
    <link media="all" type="text/css" rel="stylesheet" href="https://fonts.googleapis.com/css?family=Lato:400,300,100italic,100,300italic,400italic,700,700italic,900,900italic">
    <link media="screen" type="text/css" rel="stylesheet" href="https://cdn.ahrefs.com/assets/css/home-responsive.css?20180815-001">

    <link media="all" type="text/css" rel="stylesheet" href="css/bootstrap.min.css">
    <link media="all" type="text/css" rel="stylesheet" href="css/css.css">
    <link media="screen" type="text/css" rel="stylesheet" href="css/home-responsive.css">

    <meta property="og:image" content="GATE_1.PNG">

    <style>
      div.Intro
      {
        font-size: 100%;
        text-align: left;
      }

      div.Table
      {
        font-size: 218%;
        text-align: center;
      }

      a:hover { color:#ddeeff; }
      a:visited { color:#E8E8E8 }

      tr a{ font-size: 18px;color:#aabbcc; }
      tr a:hover { color:#ddeeff; }

      .pic-container-1{display:block; position:relative; }
      .pic-container-1 .pic-box{display:block;}
      .pic-container-1 .pic-box img{display:block;}
      .pic-container-1 .pic-hover{position:absolute; top:0px; left:104px; display:none;}
      .pic-container-1:hover .pic-hover{display:block;}

      .pic-container-2{display:block; position:relative; }
      .pic-container-2 .pic-box{display:block;}
      .pic-container-2 .pic-box img{display:block;}
      .pic-container-2 .pic-hover{position:absolute; top:0px; left:18px; display:none;}
      .pic-container-2:hover .pic-hover{display:block;}

      .pic-container-3{display:block; position:relative; }
      .pic-container-3 .pic-box{display:block;}
      .pic-container-3 .pic-box img{display:block;}
      .pic-container-3 .pic-hover{position:absolute; top:0px; left:20px; display:none;}
      .pic-container-3:hover .pic-hover{display:block;}

      .pic-container-4{display:block; position:relative; }
      .pic-container-4 .pic-box{display:block;}
      .pic-container-4 .pic-box img{display:block;}
      .pic-container-4 .pic-hover{position:absolute; top:0px; left:18px; display:none;}
      .pic-container-4:hover .pic-hover{display:block;}

      #GATE_Frame_1 { width: 78%; height: auto; }
      #GATE_Frame_2 { width: 98%; height: auto; }

      #Balance { width: 80%; height: auto; }

      #Ted_Murphree_img { width: 36vw; height: auto; }
      #Scott_Schober_img { width: 36vw; height: auto; }
      #Cary_Pool_img { width: 36vw; height: auto; }
      #Eduard_B_img { width: 36vw; height: auto; }
      #Jonathan_Rosenoer_img { width: 36vw; height: auto; }

      #Traditional_vs_GATE_1 { width: 96%; height: auto; }
      #Traditional_vs_GATE_2 { width: 99.5%; height: auto; }

      #modal
      {
        display: none;
        position: fixed;
        width: 100vw;
        height: 100vh;
        max-height: 100vh;
        top: 0;
        left: 0;
        background: rgba(24, 24, 24, .6);
        z-index: 999;
      }
      #modal .content
      {
        position: relative;
        width: 55%;
        height: 65vh;
        margin: auto; /* allows horizontal and vertical alignment as .content is in flex container */
      }
      #modal .content .yt-video
      {
        display: block;
        width: 100%;
        height: calc(100% - 45px);
      }
      #modal .content .title
      {
        box-sizing: border-box;
        height: 45px;
        line-height: 23px;
        padding: 12px 4px;
        margin: 0;
        background: #007bff;
        color: #fff;
        text-align: center;
        font-size: 26px;
        max-width: 100%;
        white-space: nowrap;
        overflow: hidden;
        text-overflow: ellipsis;
      }
      #modal .close
      {
        position: absolute;
        top: 0;
        right: 0;
        width: 45px;
        height: 45px;
        line-height: 36px;
        text-align: center;
        border: 0;
        font-weight: bold;
        font-size: 38px;
        color: #fff;
        background: #366;
        cursor: pointer;
        transition: background .2s;
      }
      #modal .content .close .a { font-size:38px;color: #ffffff; }
      #modal .close:hover, #modal .close:active { background: #ff0000; }
      #modal.is-visible { display: flex; }

      html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, img, ins, kbd, q, s, samp, small, strike, strong, sub, sup, tt, var, b, u, i, center, dl, dt, dd, ol, ul, li,
      fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td, article, aside, canvas, details, embed, figure, figcaption, footer, header, hgroup, menu, nav, output, ruby, section, summary, time, mark, audio, video
      {
        margin: 0;
        padding: 0;
        border: 0;
        font-size: 100%;
        font: inherit;
        vertical-align: middle;
      }

      /* HTML5 display-role reset for older browsers */
      article, aside, details, figcaption, figure, footer, header, hgroup, menu, nav, section { display: block; }
      body { line-height: 1; }
      // ol, ul { list-style: none; }
      blockquote, q { quotes: none; }
157 blockquote:before, blockquote:after,
158 q:before, q:after
159 {
160 content: '';
161 content: none;
162 }
163 table
164 {
165 border-collapse: collapse;
166 border-spacing: 0;
167 }
168 </style>
169 </head>
170
<body class="page__guest ahrefs page-home">
<div id="localizejs">
<div class="content">
<a id="Awards"></a>
<div class="datas">
    <div class="container center">
        <Table Cellpadding=6>
        <Tr>
        <Td Align=Center><Br>
        <Font Color=white><A Href=http://bestech.ittn.com.cn/#/projectlist2021 target=_blank>GATE has been selected</A> to the <A Href="2021_ZGC_Top_100_List_1.PNG" target=_blank>top 100</A>,<Br> among more than 2800 technologies collected<Br> from all over the world at 2021 ZGC<Br><A Href=http://bestech.ittn.com.cn/#/home target=_blank>International Technology Trade Conference</A>.</Font>
        </Td>
        </Tr>
        </Table>
    </div>
</div>

<!-- the modal div that will open when an anchor link is clicked to show the related video in an iframe. -->

<div id="modal">
    <div class="content">
        <div class="close"><a onclick = "return close_iFrame();">&times;</a></div>
        <h4 class="title">.</h4>
        <iframe class="yt-video" allowfullscreen></iframe>
    </div>
</div>
<script>
var modal = document.getElementById('modal'),
    closeBtn = modal.querySelector('.close'), // class selector needs the leading dot
    ytVideo = modal.querySelector('.content .yt-video'),
    title = modal.querySelector('.content .title'),
    anchors = document.querySelectorAll('a[data-target="modal"]'),
    l = anchors.length;

for (var i = 0; i < l; i++)
{
    anchors[i].addEventListener("click", function (e)
    {
        e.preventDefault();
        title.textContent = this.dataset.videoTitle || 'No title';
        ytVideo.src = this.href;
        modal.classList.toggle('is-visible');
        modal.focus();
    });
}

modal.addEventListener("keydown", function (e)
{
    if (e.keyCode == 27)
    {
        title.textContent = '';
        ytVideo.src = '';
        this.classList.toggle('is-visible');
    }
});
</script>

<script type="text/javascript">

function close_iFrame()
{
    var modal = document.getElementById('modal'),
        ytVideo = modal.querySelector('.content .yt-video');

    ytVideo.src = '';
    modal.classList.toggle('is-visible');

    // Opera 8.0+
    var isOpera = (!!window.opr && !!opr.addons) || !!window.opera || navigator.userAgent.indexOf(' OPR/') >= 0;

    // Firefox 1.0+
    var isFirefox = typeof InstallTrigger !== 'undefined';

    // Safari 3.0+ "[object HTMLElementConstructor]"
    var isSafari = /constructor/i.test(window.HTMLElement) || (function (p) { return p.toString() === "[object SafariRemoteNotification]"; })(!window['safari'] || safari.pushNotification);

    // Internet Explorer 6-11
    var isIE = /*@cc_on!@*/false || !!document.documentMode;

    // Edge 20+
    var isEdge = !isIE && !!window.StyleMedia;

    // Chrome 1+
    var isChrome = !!window.chrome && !!window.chrome.webstore;

    // Blink engine detection
    var isBlink = (isChrome || isOpera) && !!window.CSS;

    var output = 'Detecting browsers by duck typing :\n===========================\n';
    output += 'isChrome: ' + isChrome + '\n'; // 57.8 % market share
    output += 'isSafari: ' + isSafari + '\n'; // 14.0 %
    output += 'isFirefox: ' + isFirefox + '\n'; // 6.0 %
    output += 'isIE: ' + isIE + '\n';
    output += 'isEdge: ' + isEdge + '\n'; // 5.9 % IE + Edge
    output += 'isOpera: ' + isOpera + '\n'; // 3.7 %
    output += 'isBlink: ' + isBlink + '\n';

// alert(output + '[ history.length = ' + history.length + ' ]');

    if (isChrome) // 57.8 % [ will work correctly only after the 3rd+ time of going to the #Videos section ]
    {
/*
[1] No code : after 1st play, "back" plays sound
    after 2nd play, "back" also plays sound, remembers history
    after playing 2 videos, 1 "back" plays last video, 2 "back" does nothing, 3 "back" plays 2nd last video
    Seems to remember [ empty ] + [ video ]

Memory pattern : Top [video_1] [ ] [video_2] ?
*/

        if (!sessionStorage.getItem("runOnce")) // 1st time : remembers 1st video // 2nd time : back to top after closing iFrame // 3rd time+ : works correctly
        {
            // alert('runOnce');
            window.history.replaceState({}, "Videos", "#Videos");
            // window.location.href = '#Videos';
            // history.go(0);
            sessionStorage.setItem("runOnce", true);
        }
        else
        {
            window.history.replaceState({}, "Videos", "#Videos");
            history.go(-1);
        }

    }
    else if (isSafari) // 14.0 %
    {

    }
    else if (isFirefox) // 6.0 % [ works correctly ]
    {
        history.go(-1);
    }
    else if (isIE)
    {
        window.history.replaceState({}, "Videos", "#Videos");
    }
    else if (isEdge) // 5.9 % IE + Edge
    {
        history.go(-1);
    }
    else if (isOpera) // 3.7 %
    {
        history.go(-1);
    }
    else if (isBlink)
    {
        history.go(-1);
    }
    // alert( window.location.href );
    // history.go(-1);
    // window.location.href = '#Videos';
    // history.replaceState({}, "#Videos", "#Videos");
    // alert( window.location.href );
}

window.onload = function ()
{
    // Opera 8.0+
    var isOpera = (!!window.opr && !!opr.addons) || !!window.opera || navigator.userAgent.indexOf(' OPR/') >= 0;

    // Firefox 1.0+
    var isFirefox = typeof InstallTrigger !== 'undefined';

    // Safari 3.0+ "[object HTMLElementConstructor]"
    var isSafari = /constructor/i.test(window.HTMLElement) || (function (p) { return p.toString() === "[object SafariRemoteNotification]"; })(!window['safari'] || safari.pushNotification);

    // Internet Explorer 6-11
    var isIE = /*@cc_on!@*/false || !!document.documentMode;

    // Edge 20+
    var isEdge = !isIE && !!window.StyleMedia;

    // Chrome 1+
    var isChrome = !!window.chrome && !!window.chrome.webstore;

    // Blink engine detection
    var isBlink = (isChrome || isOpera) && !!window.CSS;

    var output = 'Detecting browsers by duck typing :\n===========================\n';
    output += 'isChrome: ' + isChrome + '\n'; // 57.8 % market share
    output += 'isSafari: ' + isSafari + '\n'; // 14.0 %
    output += 'isFirefox: ' + isFirefox + '\n'; // 6.0 %
    output += 'isIE: ' + isIE + '\n';
    output += 'isEdge: ' + isEdge + '\n'; // 5.9 % IE + Edge
    output += 'isOpera: ' + isOpera + '\n'; // 3.7 %
    output += 'isBlink: ' + isBlink + '\n';

// alert(output);

    if (isIE)
    {
        // alert(output);
        var pichover = document.getElementsByClassName("pic-hover");
        pichover[0].style.left = "107px";
        pichover[1].style.left = "24px";
        pichover[2].style.left = "23px";
        pichover[3].style.left = "21px";
    }
}

</script>
</body>
</html>
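The modal open/close flow in the two scripts above reduces to toggling an `is-visible` class and clearing the iframe's `src` (which stops playback). A self-contained sketch of that logic, with plain objects standing in for the DOM nodes (all names here are illustrative stand-ins, not part of the page):

```javascript
// Tiny stand-in for an element's classList, so the logic runs without a DOM.
function makeClassList() {
  const classes = new Set();
  return {
    toggle(name) {
      if (classes.has(name)) classes.delete(name);
      else classes.add(name);
    },
    contains(name) {
      return classes.has(name);
    },
  };
}

const modal = { classList: makeClassList() };
const ytVideo = { src: '' };

function openModal(href) {
  // point the iframe at the video, then show the modal
  ytVideo.src = href;
  modal.classList.toggle('is-visible');
}

function closeModal() {
  // clearing src stops playback; toggling hides the modal again
  ytVideo.src = '';
  modal.classList.toggle('is-visible');
}

openModal('https://www.youtube.com/embed/example');
console.log(modal.classList.contains('is-visible')); // true
closeModal();
console.log(modal.classList.contains('is-visible')); // false
```

The same toggle is what `#modal.is-visible { display: flex; }` keys off in the stylesheet.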
ANSWER
Answered 2021-Oct-11 at 16:04

but this is bad
a[href] { color: blue !important; text-decoration: underline !important; }
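A note on syntax for that rule: CSS declarations inside a block are separated by semicolons, and writing a comma between `color` and `text-decoration` makes browsers drop the entire declaration block. The safe form:

```css
a[href] {
  color: blue !important;
  text-decoration: underline !important;
}
```

Also worth noting that `!important` here is a blunt instrument; it only wins over the page's other `a` rules because nothing else uses `!important` on links.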
QUESTION
Bootstrap overflow width when writing an article with many paragraphs
Asked 2021-Oct-12 at 11:38

OVERVIEW
I am building a website to showcase the blockchain and cybersecurity projects I have worked on. So far, I have implemented two pages of my website using Bootstrap v5.1.3. I'm no front-end developer, but I still wanted to build something of my own.
Currently I'm writing the description of one of my projects, and later I will add some images to it.
PROBLEM
If I write too many paragraphs on the page, a horizontal scroll bar appears, and I don't know how to make it disappear.
I want all the paragraphs to be responsive and stay within the viewport width of the screen, without overflowing and creating the horizontal bar.
Check the image below for a better explanation.
IMAGE
QUESTION
How can I solve this using only Bootstrap v5.1.3?
WEBPAGE CODE
/* test.css */

html, body {
    margin: 0;
    padding: 0;
    width: 100%;
    height: 100%;
    font-family: Hack, monospace !important;
    background-color: #0f0f0f;
}

body {
    display: flex !important;
}

.wrapper {
    background-color: #0f0f0f;
}

.text-center.h1 {
    color: #F4364C !important;
    font-size: 4vw !important;
}

.h6 {
    color: #F4364C !important;
    font-size: 1.25vw !important;
    opacity: 0.5 !important;
}
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <meta name="author" content="Joshua">

    <title>Project 1 | XXX XXX</title>

    <!-- hack fonts -->
    <link href='https://cdn.jsdelivr.net/npm/hack-font@3.3.0/build/web/hack.css' rel='stylesheet' >

    <!-- stylesheet -->
    <link href='test.css' rel='stylesheet'>

    <!-- bootstrap-5.1.3 -->
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-1BmE4kWBq78iYhFldvKuhfTAU6auU8tT94WrHftjDbrCEXSU1oBoqyl2QvZ6jIW3" crossorigin="anonymous">

    <!-- bootstrap-5.1.3 script bundle with popper -->
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js" integrity="sha384-ka7Sk0Gln4gmtz2MlQnikT1wXgYsOg+OMhuP+IlRH9sENBO0LRn5q+8nbTov4+1p" crossorigin="anonymous"></script>

</head>
<body>
    <div class="d-flex flex-column min-vh-100 min-vw-100 wrapper">

        <!-- Project Title -->
        <div class="container-fluid my-auto">
            <p class="text-center h1"><span>Astronomy Star Registry</span></p>
        </div>

        <div class="container-fluid my-auto">
            <p class="h6">
                Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque non nibh sit amet eros ullamcorper tincidunt. Curabitur sed imperdiet erat. In facilisis urna magna, ut mollis est posuere nec. Duis non neque vel libero dignissim dictum. Nullam scelerisque, sem porttitor dignissim blandit, enim felis condimentum enim, non cursus felis ex vel felis. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Nunc lectus odio, finibus nec porta non, varius pulvinar eros. Aenean eget vulputate lorem, sed mollis ipsum. In mollis iaculis sem, quis sodales metus sodales quis. In nec efficitur libero, quis pharetra turpis. Nunc a felis vestibulum lacus feugiat euismod. Integer id diam a arcu dictum imperdiet nec at libero. Aliquam lorem dui, faucibus non posuere vel, venenatis vel augue. Aenean lorem ex, eleifend ut dictum a, semper nec risus. Nunc varius erat tortor, vitae sagittis sem vehicula non.
            </p>
            <p class="h6">
                Aliquam erat volutpat. Pellentesque sagittis, nisi ac tempor lobortis, lacus neque posuere libero, vel maximus nibh dui non massa. Nulla at lectus vestibulum, tristique nisi at, vulputate ex. Vestibulum sit amet pharetra tortor. Sed felis nulla, finibus ut ipsum eget, pretium mollis quam. Proin urna metus, cursus non turpis vel, elementum blandit nulla. Nulla eu accumsan ipsum. Donec sodales tellus a turpis dapibus tincidunt. Praesent luctus vestibulum magna, ac feugiat metus ullamcorper eu. Mauris non elementum nunc, sed sagittis risus. Cras sed elit laoreet, faucibus ligula quis, tempus quam. Donec posuere eget eros eu pulvinar. Vestibulum justo augue, feugiat elementum erat sit amet, tempor porttitor urna. Integer malesuada mauris et ultricies sollicitudin.
            </p>
            <p class="h6">
                Cras hendrerit quis velit vel molestie. Proin ut velit metus. Sed semper et neque non rhoncus. Cras semper dui eget eros tempus, sed malesuada nisi dignissim. Aliquam ante dolor, ultricies quis varius at, pellentesque nec urna. Mauris sit amet commodo nulla, ac malesuada lacus. Proin bibendum quis quam vel volutpat. Ut pulvinar tincidunt vehicula.
            </p>
            <p class="h6">
                Phasellus sit amet vulputate neque, id mattis velit. Vivamus porttitor tellus ac est dictum lacinia. Aenean tincidunt tempus fringilla. Sed aliquam nibh ut turpis condimentum, eget malesuada nibh iaculis. Ut tincidunt at nisl vel tristique. Nam quam nunc, lacinia eget augue dictum, aliquet aliquam lectus. Aenean eleifend quam nec est tempus imperdiet.
            </p>
            <p class="h6">
                In nec leo at tellus bibendum blandit sodales at neque. Sed vel dolor in tellus lobortis imperdiet venenatis in lectus. Ut ex ex, bibendum in fringilla et, vestibulum id mauris. Nam eu lorem nisi. Donec vitae fermentum est. Quisque sodales imperdiet felis, viverra consectetur enim egestas a. Duis leo orci, malesuada nec dolor ac, efficitur consequat dui. Aliquam lobortis commodo viverra.
            </p>
            <p class="h6">
                Nunc vulputate ultricies metus in molestie. Mauris ultrices metus feugiat augue mollis ultrices. Quisque ac mattis enim, sed suscipit orci. Fusce eu enim tempor, bibendum ligula quis, faucibus ligula. Aenean nec iaculis tortor, eu suscipit sem. Proin in elit at lectus euismod lacinia. Quisque ac auctor felis, eget ultrices orci. Curabitur accumsan, massa dictum pellentesque feugiat, mauris velit tincidunt mi, ut porta nisl nibh id nisi. Nam non facilisis arcu. Aliquam eros est, elementum a leo sit amet, porttitor euismod ligula. Maecenas tellus massa, molestie ut ultrices at, finibus ac mauris.
            </p>
            <p class="h6">
                Aliquam congue faucibus libero. Aenean sed suscipit ipsum. Aenean varius eleifend metus in pulvinar. Ut dapibus condimentum vehicula. Sed dictum arcu nulla, eget semper turpis fermentum at. Nam congue pretium rutrum. Mauris sit amet mauris sagittis, pulvinar nunc et, posuere diam.
            </p>
            <p class="h6">
                Nunc tortor elit, interdum eget lacinia sed, tincidunt quis ex. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus et nunc eu nibh pulvinar eleifend. Pellentesque porttitor feugiat placerat. In at felis est. Etiam scelerisque velit pharetra, blandit erat non, mattis ex. Aenean congue tortor nec diam maximus, eget auctor nisi accumsan. Sed at dignissim sem, eu placerat tellus. Curabitur lobortis dui nec lorem gravida pellentesque. Duis sagittis, tortor sit amet dapibus finibus, nisi lacus maximus tellus, nec convallis orci velit non libero. Cras sodales, sem in sodales tincidunt, nisi magna facilisis felis, imperdiet elementum erat turpis a dui. Duis non felis pretium, viverra dui eget, condimentum erat. Nunc lobortis convallis felis, ac scelerisque sem cursus a. Ut in gravida tortor. Cras porttitor sapien sem. Aenean cursus erat et libero scelerisque placerat.
            </p>
            <p class="h6">
                Suspendisse potenti. Sed varius ipsum sem, imperdiet vehicula orci pharetra sit amet. Nulla facilisi. Integer faucibus sed tellus quis cursus. Donec lacinia varius ipsum, vitae bibendum justo pharetra vel. Nunc facilisis a dolor sit amet maximus. In nec leo iaculis, pharetra tortor ac, imperdiet arcu. Duis non rhoncus enim, vehicula tincidunt orci. Ut in augue at ante sagittis efficitur ac eget tortor. Nunc eget felis ac quam tempor volutpat. Phasellus id volutpat tortor. Sed cursus eros at interdum convallis. Morbi ullamcorper felis eget massa porttitor pulvinar sed vitae purus. Ut iaculis ante eget ipsum congue, ut efficitur diam condimentum. Etiam lobortis dolor est, sed fringilla diam placerat eget.
            </p>
            <p class="h6">
                Vivamus consectetur, nisi in dapibus vehicula, ipsum eros congue nunc, a posuere nisl mauris vitae sem. Sed interdum placerat commodo. Quisque id molestie sapien. Vestibulum vitae tempus ligula. Morbi eu molestie risus. Vivamus ac sapien tincidunt, hendrerit nibh ut, sagittis lacus. Maecenas pellentesque elementum libero non pretium. Proin in sodales massa. Praesent eu blandit libero. Interdum et malesuada fames ac ante ipsum primis in faucibus.
            </p>
        </div>

    </div>
</body>
</html>
ANSWER
Answered 2021-Oct-12 at 11:38

Please remove the min-vw-100 class from your div.
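For context on why this works: in Bootstrap 5, `min-vw-100` applies `min-width: 100vw`, and `100vw` includes the width of the vertical scrollbar, so any page tall enough to scroll gets forced slightly wider than the visible area, producing the horizontal bar. The `d-flex flex-column` wrapper is block-level and already spans the full width, so the class can simply be dropped:

```html
<!-- wrapper without min-vw-100; min-vh-100 alone keeps the full-height layout -->
<div class="d-flex flex-column min-vh-100 wrapper">
  ...
</div>
```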
62 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque non nibh sit amet eros ullamcorper tincidunt. Curabitur sed imperdiet erat. In facilisis urna magna, ut mollis est posuere nec. Duis non neque vel libero dignissim dictum. Nullam scelerisque, sem porttitor dignissim blandit, enim felis condimentum enim, non cursus felis ex vel felis. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Nunc lectus odio, finibus nec porta non, varius pulvinar eros. Aenean eget vulputate lorem, sed mollis ipsum. In mollis iaculis sem, quis sodales metus sodales quis. In nec efficitur libero, quis pharetra turpis. Nunc a felis vestibulum lacus feugiat euismod. Integer id diam a arcu dictum imperdiet nec at libero. Aliquam lorem dui, faucibus non posuere vel, venenatis vel augue. Aenean lorem ex, eleifend ut dictum a, semper nec risus. Nunc varius erat tortor, vitae sagittis sem vehicula non.
63 </p>
64 <p class="h6">
65 Aliquam erat volutpat. Pellentesque sagittis, nisi ac tempor lobortis, lacus neque posuere libero, vel maximus nibh dui non massa. Nulla at lectus vestibulum, tristique nisi at, vulputate ex. Vestibulum sit amet pharetra tortor. Sed felis nulla, finibus ut ipsum eget, pretium mollis quam. Proin urna metus, cursus non turpis vel, elementum blandit nulla. Nulla eu accumsan ipsum. Donec sodales tellus a turpis dapibus tincidunt. Praesent luctus vestibulum magna, ac feugiat metus ullamcorper eu. Mauris non elementum nunc, sed sagittis risus. Cras sed elit laoreet, faucibus ligula quis, tempus quam. Donec posuere eget eros eu pulvinar. Vestibulum justo augue, feugiat elementum erat sit amet, tempor porttitor urna. Integer malesuada mauris et ultricies sollicitudin.
66 </p>
67 <p class="h6">
68 Cras hendrerit quis velit vel molestie. Proin ut velit metus. Sed semper et neque non rhoncus. Cras semper dui eget eros tempus, sed malesuada nisi dignissim. Aliquam ante dolor, ultricies quis varius at, pellentesque nec urna. Mauris sit amet commodo nulla, ac malesuada lacus. Proin bibendum quis quam vel volutpat. Ut pulvinar tincidunt vehicula.
69 </p>
70 <p class="h6">
71 Phasellus sit amet vulputate neque, id mattis velit. Vivamus porttitor tellus ac est dictum lacinia. Aenean tincidunt tempus fringilla. Sed aliquam nibh ut turpis condimentum, eget malesuada nibh iaculis. Ut tincidunt at nisl vel tristique. Nam quam nunc, lacinia eget augue dictum, aliquet aliquam lectus. Aenean eleifend quam nec est tempus imperdiet.
72 </p>
73 <p class="h6">
74 In nec leo at tellus bibendum blandit sodales at neque. Sed vel dolor in tellus lobortis imperdiet venenatis in lectus. Ut ex ex, bibendum in fringilla et, vestibulum id mauris. Nam eu lorem nisi. Donec vitae fermentum est. Quisque sodales imperdiet felis, viverra consectetur enim egestas a. Duis leo orci, malesuada nec dolor ac, efficitur consequat dui. Aliquam lobortis commodo viverra.
75 </p>
76 <p class="h6">
77 Nunc vulputate ultricies metus in molestie. Mauris ultrices metus feugiat augue mollis ultrices. Quisque ac mattis enim, sed suscipit orci. Fusce eu enim tempor, bibendum ligula quis, faucibus ligula. Aenean nec iaculis tortor, eu suscipit sem. Proin in elit at lectus euismod lacinia. Quisque ac auctor felis, eget ultrices orci. Curabitur accumsan, massa dictum pellentesque feugiat, mauris velit tincidunt mi, ut porta nisl nibh id nisi. Nam non facilisis arcu. Aliquam eros est, elementum a leo sit amet, porttitor euismod ligula. Maecenas tellus massa, molestie ut ultrices at, finibus ac mauris.
78 </p>
79 <p class="h6">
80 Aliquam congue faucibus libero. Aenean sed suscipit ipsum. Aenean varius eleifend metus in pulvinar. Ut dapibus condimentum vehicula. Sed dictum arcu nulla, eget semper turpis fermentum at. Nam congue pretium rutrum. Mauris sit amet mauris sagittis, pulvinar nunc et, posuere diam.
81 </p>
82 <p class="h6">
83 Nunc tortor elit, interdum eget lacinia sed, tincidunt quis ex. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus et nunc eu nibh pulvinar eleifend. Pellentesque porttitor feugiat placerat. In at felis est. Etiam scelerisque velit pharetra, blandit erat non, mattis ex. Aenean congue tortor nec diam maximus, eget auctor nisi accumsan. Sed at dignissim sem, eu placerat tellus. Curabitur lobortis dui nec lorem gravida pellentesque. Duis sagittis, tortor sit amet dapibus finibus, nisi lacus maximus tellus, nec convallis orci velit non libero. Cras sodales, sem in sodales tincidunt, nisi magna facilisis felis, imperdiet elementum erat turpis a dui. Duis non felis pretium, viverra dui eget, condimentum erat. Nunc lobortis convallis felis, ac scelerisque sem cursus a. Ut in gravida tortor. Cras porttitor sapien sem. Aenean cursus erat et libero scelerisque placerat.
84 </p>
85 <p class="h6">
86 Suspendisse potenti. Sed varius ipsum sem, imperdiet vehicula orci pharetra sit amet. Nulla facilisi. Integer faucibus sed tellus quis cursus. Donec lacinia varius ipsum, vitae bibendum justo pharetra vel. Nunc facilisis a dolor sit amet maximus. In nec leo iaculis, pharetra tortor ac, imperdiet arcu. Duis non rhoncus enim, vehicula tincidunt orci. Ut in augue at ante sagittis efficitur ac eget tortor. Nunc eget felis ac quam tempor volutpat. Phasellus id volutpat tortor. Sed cursus eros at interdum convallis. Morbi ullamcorper felis eget massa porttitor pulvinar sed vitae purus. Ut iaculis ante eget ipsum congue, ut efficitur diam condimentum. Etiam lobortis dolor est, sed fringilla diam placerat eget.
87 </p>
88 <p class="h6">
89 Vivamus consectetur, nisi in dapibus vehicula, ipsum eros congue nunc, a posuere nisl mauris vitae sem. Sed interdum placerat commodo. Quisque id molestie sapien. Vestibulum vitae tempus ligula. Morbi eu molestie risus. Vivamus ac sapien tincidunt, hendrerit nibh ut, sagittis lacus. Maecenas pellentesque elementum libero non pretium. Proin in sodales massa. Praesent eu blandit libero. Interdum et malesuada fames ac ante ipsum primis in faucibus.
90 </p>
91 </div>
92
93 </div>
94</body>
95</html><div class="d-flex flex-column min-vh-100 wrapper">
96
QUESTION
Faster way than nested for loops for custom conditions on multiple columns in two DataFrames
Asked 2021-Jul-16 at 20:35
I have two DataFrames as below:
df1
+------------+-------------------+-------------+
| Name       | Topic             | Date        |
+------------+-------------------+-------------+
| ABC        | Data Science      | 2020-01-01  |
| DEF        | Machine Learning  | 2021-03-06  |
| ABC        | Cybersecurity     | 2021-01-05  |
| BHL        | Cloud Computing   | 2020-11-09  |
+------------+-------------------+-------------+

It has around 50,000 rows.
The second DataFrame has several columns, but I am interested in only the following three:
df2
+------------------------------------+------+-------------+
| Description                        | Name | Created Date|
+------------------------------------+------+-------------+
| This is good Data Science project. | XYZ  | 2021-06-04  |
| Cybersecurity is important.        | BBB  | 2021-02-03  |
| I am Data Science Professional     | ABC  | 2021-02-08  |
| Machine Learning is strategic.     | DEF  | 2021-03-01  |
+------------------------------------+------+-------------+

It has around 300,000 rows.
I want to find all the rows from df2 where:
For each unique (Name, Topic, Date) in df1, the 'Name' matches, 'Created Date' falls within the six months after 'Date', and the 'Topic' appears in the 'Description'.
I have used two for loops to iterate over the rows of both DataFrames, as shown below. But with this many rows, iterating row by row is not the best method. Can you please suggest a faster and more efficient way? I would also like to attach 'Topic' and 'Date' from df1 to each matching row of df2 (some kind of merge, but I am not sure how).
My code is as follows:
import pandas as pd
from dateutil.relativedelta import relativedelta

df1 = df1.drop_duplicates()  # Drop duplicate entries

df_final = pd.DataFrame()

for index1, row1 in df1.iterrows():
    future_date = row1['Date'] + relativedelta(months=6)
    for index2, row2 in df2.iterrows():
        if ((row1['Name'] == row2['Name']) and (row1['Date'] < row2['Created Date'] < future_date)
                and (row1['Topic'] in row2['Description'])):
            df_final = df_final.append(row2)
        else:
            continue
ANSWER
Answered 2021-Jul-16 at 20:35
Try these steps:
# Drop duplicate rows in df1
df1 = df1.drop_duplicates()
# Merge df2 with df1 on Name
df2 = df2.merge(df1, how='inner', on='Name')
# pd.DateOffset shifts the whole datetime column by six months at once
future_date = df2['Date'] + pd.DateOffset(months=6)
# Keep rows whose Created Date falls within the six months after Date
df2 = df2[(df2['Created Date'] > df2['Date']) & (df2['Created Date'] < future_date)]
# Keep rows whose Topic appears inside the Description
df2 = df2[df2.apply(lambda x: x['Topic'] in x['Description'], axis=1)]
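The merge-and-filter idea can be sketched end to end. The frames below mirror the question's sample data, plus one extra df2 row (a hypothetical addition) so the filter has a match to return; `pd.DateOffset(months=6)` stands in for `relativedelta` so the arithmetic works directly on the datetime column:

```python
import pandas as pd

# Sample data mirroring the question; the last df2 row is a hypothetical
# addition so that at least one row satisfies all three conditions.
df1 = pd.DataFrame({
    "Name": ["ABC", "DEF", "ABC", "BHL"],
    "Topic": ["Data Science", "Machine Learning", "Cybersecurity", "Cloud Computing"],
    "Date": pd.to_datetime(["2020-01-01", "2021-03-06", "2021-01-05", "2020-11-09"]),
})
df2 = pd.DataFrame({
    "Description": [
        "This is good Data Science project.",
        "Cybersecurity is important.",
        "I am Data Science Professional",
        "Machine Learning is strategic.",
        "Cybersecurity matters a lot.",  # hypothetical extra row
    ],
    "Name": ["XYZ", "BBB", "ABC", "DEF", "ABC"],
    "Created Date": pd.to_datetime(
        ["2021-06-04", "2021-02-03", "2021-02-08", "2021-03-01", "2021-02-10"]
    ),
})

# One row per (df2 row, matching df1 row) pair, joined on Name.
merged = df2.merge(df1.drop_duplicates(), on="Name", how="inner")

# Created Date must fall within the six months after Date.
in_window = (merged["Created Date"] > merged["Date"]) & (
    merged["Created Date"] < merged["Date"] + pd.DateOffset(months=6)
)
# Topic must appear inside Description.
topic_hit = [t in d for t, d in zip(merged["Topic"], merged["Description"])]

result = merged[in_window & pd.Series(topic_hit, index=merged.index)]
print(result[["Name", "Topic", "Date", "Description", "Created Date"]])
```

On this sample the result is a single row (the extra df2 row matched against df1's Cybersecurity entry). The same three conditions from the nested loops are applied, but the Name match is delegated to a vectorized merge, so only same-Name pairs are ever compared.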
QUESTION
Find a hash function that makes insertion sort malfunction
Asked 2021-May-25 at 02:24
Below is the original pseudocode for Insertion Sort:
function INSERTIONSORT(A[0..n−1])
    for i ← 1 to n−1 do
        j ← i−1
        while j ≥ 0 and A[j+1] < A[j] do
            SWAP(A[j+1], A[j])
            j ← j−1
A company is using Insertion Sort inside one of its products. You are a cybersecurity expert who was hired by this company to assess any security flaws in their code. After a few tries, you managed to attack their Insertion Sort code and modify it in the following way:
function INSERTIONSORT(A[0..n−1])
    for i ← 1 to n−1 do
        j ← i−1
        while j ≥ 0 and HASH(A,j+1) < HASH(A,j) do
            SWAP(A[j+1], A[j])
            j ← j−1
In other words, instead of indexing the array as A[j] and A[j+1] inside the "while" condition, you now have a hash function that takes the array and an index as arguments and returns an integer. Your job is to implement specific hash functions that will cause the algorithm to malfunction in different ways.
- a) Implement a hash function that causes Insertion Sort to keep the original array unchanged. Explain why your solution works.
- b) Implement a hash function that causes Insertion Sort to always run in the worst case complexity, even if the resulting array does not end up getting sorted. Explain why your solution works.
- c) Implement a hash function that causes Insertion Sort to sort the array in reverse. Explain why your solution works.
I think the answers to (a) and (b) are hash(A,j)=j and hash(A,j)=-j, but I have no idea whether that is correct, and no clue about (c).
ANSWER
Answered 2021-May-25 at 02:24
Part a) Original array unchanged
#include <stdio.h>

int hash(int arr[], int i) {
    return i;
}

void insertionSort(int arr[], int n) {
    int i, j, temp;

    for (i = 1; i <= n - 1; i++)
    {
        j = i - 1;
        while (j >= 0 && hash(arr, j + 1) < hash(arr, j))
        {
            temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
            j--;
        }
    }
}

int main()
{
    int i;
    int arr[] = {5, 6, 7, 3, 2, 9, 4};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    printf("Original array unchanged:\n");
    for (i = 0; i <= n - 1; i++)
    {
        printf("%d\n", arr[i]);
    }
    return 0;
}
This works because HASH(A,j+1) = j+1 is never less than HASH(A,j) = j, so the while condition is never true and no swap is ever performed.
Part b) Worst Case insertion sort
#include <stdio.h>

int hash(int arr[], int i) {
    return -i;
}

void insertionSort(int arr[], int n) {
    int i, j, temp;

    for (i = 1; i <= n - 1; i++)
    {
        j = i - 1;
        while (j >= 0 && hash(arr, j + 1) < hash(arr, j))
        {
            temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
            j--;
        }
    }
}

int main()
{
    int i;
    int arr[] = {5, 6, 7, 3, 2, 9, 4};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    printf("In worst case (maximum number of swaps):\n");
    for (i = 0; i <= n - 1; i++)
    {
        printf("%d\n", arr[i]);
    }
    return 0;
}
This works because HASH(A,j+1) = -(j+1) is always less than HASH(A,j) = -j, so the while condition is always true and every new element is swapped all the way to the front: n(n-1)/2 swaps in total, the Θ(n²) worst case, even though the result does not end up sorted.
Part c) Sorted in reverse order
#include <stdio.h>

int hash(int arr[], int i) {
    return -arr[i];
}

void insertionSort(int arr[], int n) {
    int i, j, temp;

    for (i = 1; i <= n - 1; i++)
    {
        j = i - 1;
        while (j >= 0 && hash(arr, j + 1) < hash(arr, j))
        {
            temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
            j--;
        }
    }
}

int main()
{
    int i;
    int arr[] = {5, 6, 7, 3, 2, 9, 4};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    printf("Sorted in reverse order:\n");
    for (i = 0; i <= n - 1; i++)
    {
        printf("%d\n", arr[i]);
    }
    return 0;
}
This works because -A[j+1] < -A[j] exactly when A[j+1] > A[j], so the inverted comparison makes insertion sort order the array in descending order.
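The behaviour of all three hash functions can be checked with a small harness; this Python sketch mirrors the C code above:

```python
def insertion_sort(a, hash_fn):
    """Insertion sort whose comparison is routed through hash_fn(a, i)."""
    a = list(a)  # work on a copy
    for i in range(1, len(a)):
        j = i - 1
        while j >= 0 and hash_fn(a, j + 1) < hash_fn(a, j):
            a[j], a[j + 1] = a[j + 1], a[j]
            j -= 1
    return a

data = [5, 6, 7, 3, 2, 9, 4]

# a) hash(A,i) = i: the condition (j+1) < j is never true, so nothing moves.
unchanged = insertion_sort(data, lambda a, i: i)

# b) hash(A,i) = -i: the condition -(j+1) < -j is always true, so every
#    insertion scans and swaps all the way to the front (worst-case swaps).
worst = insertion_sort(data, lambda a, i: -i)

# c) hash(A,i) = -A[i]: -A[j+1] < -A[j] is equivalent to A[j+1] > A[j],
#    so the array comes out in descending order.
reverse = insertion_sort(data, lambda a, i: -a[i])

print(unchanged)  # [5, 6, 7, 3, 2, 9, 4]
print(reverse)    # [9, 7, 6, 5, 4, 3, 2]
```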
QUESTION
Component won't render when useEffect() is run once
Asked 2021-May-17 at 11:34
So I have the following code, where I'm fetching data to be rendered in my component. However, if the useEffect is set to run only once, it won't render the data inside the component, and having it run constantly is not sustainable.
import React, { useState, useEffect } from "react";
import Chart from "react-google-charts";


const Bottom5 = ({ company }) => {
  const [quiz, setQuiz] = useState('');
  const [dataPoints, setDatapoints] = useState([]);

  useEffect(() => {
    var resultData = [];
    fetch(`http://localhost:3001/company/dashboard/bottom5/${company}`)
      .then(function(response) {
        return response.json();
      })
      .then(function(data) {
        for (var i = 0; i < data.length; i++) {
          resultData.push({
            label: data[i].name,
            y: data[i].sumCorrect
          });
        }
        setDatapoints(resultData)
      });
  }, [])


  return (
    <Chart style={{display:"inline-block"}}
      width={'500px'}
      height={'300px'}
      chartType="ColumnChart"
      loader={<div>Loading Chart</div>}
      data={[
        ['Names', 'Result'],
        ...dataPoints.map(d => [d.label, d.y])
      ]}
      options={{
        title: 'CyberSecurity Bottom 5',
        chartArea: { width: '50%' },
        hAxis: {
          title: 'Employees',
          minValue: 0,
        },
        vAxis: {
          title: 'Total Correct',
        },
      }}
      // For tests
      rootProps={{ 'data-testid': '1' }}
    />
  )
}

export default Bottom5;
ANSWER
Answered 2021-Apr-30 at 16:19
There is an issue with how the array is updated using hooks.
setDatapoints(resultData)      // reference stays the same - state does not update
setDatapoints([...resultData]) // do this instead
If the reference of the array does not change, the hook does not register an update, so the component never re-renders.
Community Discussions contain sources that include Stack Exchange Network
Tutorials and Learning Resources in Cybersecurity
Tutorials and Learning Resources are not currently available for Cybersecurity.