Popular New Releases in Networking
- frp v0.41.0
- shadowsocks-windows 4.4.1.0
- requests v2.27.1
- react-router
- v2ray-core v4.31.0
Popular Libraries in Networking
- by fatedier (Go), 55,464 stars, Apache-2.0: A fast reverse proxy to help you expose a local server behind a NAT or firewall to the internet.
- by shadowsocks (C#), 53,806 stars, GPL-3.0: A C# port of shadowsocks.
- by psf (Python), 47,177 stars, Apache-2.0: A simple, yet elegant, HTTP library.
- by remix-run (TypeScript), 46,674 stars, MIT: Declarative routing for React.
- by ReactTraining (JavaScript), 43,698 stars, MIT: Declarative routing for React.
- by square (Kotlin), 41,993 stars, Apache-2.0: Square’s meticulous HTTP client for the JVM, Android, and GraalVM.
- by v2ray (Go), 39,387 stars, MIT: A platform for building proxies to bypass network restrictions.
- by caddyserver (Go), 39,075 stars, Apache-2.0: Fast, multi-platform web server with automatic HTTPS.
- by Alamofire (Swift), 37,489 stars, MIT: Elegant HTTP networking in Swift.
Trending New Libraries in Networking
- by XTLS (Go), 8,099 stars, MPL-2.0: Xray, Penetrates Everything. Also the best v2ray-core, with XTLS support. Fully compatible configuration.
- by OpenIMSDK (Go), 7,570 stars, Apache-2.0: OpenIM: an open-source instant messaging project built in Go by IM technology experts. A complete IM solution, from the server to the client SDK, that can easily replace third-party IM cloud services and power apps with chat and social features.
- by sogou (C++), 7,431 stars, Apache-2.0: Parallel computing and asynchronous networking engine.
- by tailscale (Go), 7,241 stars, NOASSERTION: The easiest, most secure way to use WireGuard and 2FA.
- by OpenIntelWireless (C), 5,394 stars, NOASSERTION: Intel Wi-Fi drivers for macOS.
- by microsoft (C#), 5,209 stars, MIT: A toolkit for developing high-performance HTTP reverse proxy applications.
- by tokio-rs (Rust), 4,413 stars, MIT: Ergonomic and modular web framework built with Tokio, Tower, and Hyper.
- by p4gefau1t (Go), 4,281 stars, GPL-3.0: A Trojan proxy written in Go, supporting multiplexing, routing rules, CDN relay, and the Shadowsocks obfuscation plugin; multi-platform with no dependencies. An unidentifiable mechanism that helps you bypass GFW. https://p4gefau1t.github.io/trojan-go/
- by screego (Go), 4,189 stars, GPL-3.0: Screen sharing for developers. https://screego.net/
Top Authors in Networking
1. 64 Libraries, 5,208 stars
2. 46 Libraries, 7,050 stars
3. 45 Libraries, 25,552 stars
4. 33 Libraries, 18,854 stars
5. 32 Libraries, 8,941 stars
6. 29 Libraries, 194 stars
7. 28 Libraries, 22,582 stars
8. 27 Libraries, 3,270 stars
9. 24 Libraries, 384 stars
10. 23 Libraries, 2,136 stars
Trending Kits in Networking
Enable security functions like password encryption, digital signatures, secure random number generation, message authentication, Two Factor Authentication (2FA), and more with Java encryption libraries.
Encryption, decryption, and key generation are the three most crucial aspects of Java cryptography. Key-based data security can be implemented with either symmetric or asymmetric encryption algorithms, depending on how secure the code needs to be. Moreover, developers can enable security functions directly in their code with the set of APIs available in the Java Cryptography Architecture (JCA). The JCA is the core of Java encryption and decryption, hashing, secure random number generation, and various other Java cryptographic functions.
Check out the below list to find more trending Java encryption libraries for your applications:
Developers generally use Java crypto libraries like Google Tink, which help both encrypt and decrypt data through simple and secure APIs.
Bouncy Castle is a popular Java cryptographic library that is used by developers to perform basic operations, such as encryption and digital signature.
Frequently used with Bouncy Castle, the Jasypt library helps enable basic encryption functions even if you don’t have deep cryptography know-how.
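The building blocks named above (secure random number generation, password protection, message authentication) are not Java-specific; here is a minimal sketch of the same three primitives using only Python's standard library, with made-up password and payload values:

```python
import hashlib
import hmac
import secrets

# Secure random number generation: a 128-bit hex token
token = secrets.token_hex(16)

# Password protection via key derivation (PBKDF2 with SHA-256)
salt = secrets.token_bytes(16)
digest = hashlib.pbkdf2_hmac("sha256", b"s3cret-password", salt, 100_000)

# Message authentication: HMAC-SHA256 over a payload
key = secrets.token_bytes(32)
tag = hmac.new(key, b"payload", hashlib.sha256).hexdigest()

# Verification should always use a constant-time comparison
expected = hmac.new(key, b"payload", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True
```

In Java, the corresponding JCA entry points would be SecureRandom, SecretKeyFactory with the PBKDF2WithHmacSHA256 algorithm, and Mac with HmacSHA256.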
Here are some of the famous C# networking libraries. Their use cases include building a chat application, an online multiplayer game, a networked file-sharing application, a networked database application, and a networked streaming media application.
C# networking libraries are collections of classes and functions used to develop network applications in the C# programming language. They provide functionality such as networking protocols, data transfer, encryption, and data storage. Examples include the .NET Framework network classes, System.Net, and OpenNETCF.
Let us have a look at these libraries in detail below.
mRemoteNG
- Supports many protocols such as RDP, VNC, SSH, Telnet, HTTP/HTTPS, and ICA/HDX.
- Rich plugin system to extend the functionality of the application.
- Powerful scripting engine to automate common tasks.
websocket-sharp
- Supports the latest websocket protocol specifications.
- Supports compression of websocket frames using the Per-Message Deflate extension.
- Actively maintained and regularly updated with new features and bug fixes.
protobuf-net
- Serialization and Deserialization.
- Compact Binary Format.
- Supports Multiple Platforms.
DotNetty
- Event-driven API.
- Protocol Agnostic.
- Built-in Pipeline.
NETworkManager
- Built-in packet inspection tool that can be used to troubleshoot and diagnose network problems.
- Powerful tools for developers, such as a network traffic simulator.
- Allows users to configure, monitor, and control their network traffic quickly.
Mirror
- High-performance, extensible, and lightweight.
- Designed to be platform-agnostic.
- Supports Unity’s built-in Networking.
surging
- High-performance TCP/IP networking stack.
- Pluggable architecture that allows developers to easily customize and extend the library to meet their specific needs.
- Provides a range of built-in security features.
BruteShark
- Supports many protocols such as HTTP, FTP, SMTP, DNS, and SSL/TLS.
- Integrated packet capture engine to capture network traffic and save it in various formats.
- Monitor multiple networks simultaneously and can detect MITM attacks.
LiteNetLib
- Supports both client-server and peer-to-peer architectures.
- Provides reliable UDP messaging with the help of its own packet fragmentation and reassembly mechanism.
- Supports automatic NAT punchthrough for connecting to peers behind a firewall or router.
MQTTnet
- Supports SSL/TLS encryption and authentication.
- Provides native support for Windows, Linux, and macOS platforms.
- Includes an integrated logging framework.
LOIC
- Allows the user to select from a variety of attack types.
- Includes a graphical user interface.
- Includes a feature called “Hive Mind”, which allows users to join a “hive” and send requests in unison with other users.
SteamKit
- Support for various languages, including C#, C++, and JavaScript.
- Highly extensible and can be used to create custom network protocols for games.
- Various functions are designed to facilitate communication between applications and the Steam network.
NetCoreServer
- Flexible API.
- Robust Security.
- Cross-Platform Compatibility.
DotNetOpenAuth
- Provides strong cryptography algorithms and secure communications protocols.
- Written in C#, it is easy to port to other platforms.
- Allows developers to extend the library for their specific use cases.
lidgren-network-gen3
- Binary Serialization.
- Peer-to-peer Networking.
- Reliability.
BeetleX
- Built-in support for Cross-Origin Resource Sharing (CORS).
- Deep integration with the .Net Core platform.
- Provides an asynchronous, non-blocking programming model with no callbacks and no threads.
BedrockFramework
- Provides a distributed object model that allows for objects to be shared across different instances without creating extra copies.
- Provides a unique set of tools for debugging and monitoring network traffic and performance.
- Allows for a more robust and reliable system than other libraries written in other languages.
EvilFOCA
- Spoofing allows users to hide their IP address when making network requests.
- The port scanning feature allows users to scan for open ports on a network.
- The mapping feature allows users to map a network and identify various devices, services, and connections.
Here are the best open-source PHP geolocation libraries for your applications. You can use these tools to determine the location of website visitors based on their IP address or other location data.
These libraries provide a range of functionality, such as geocoding, reverse geocoding, distance calculations, and mapping. They allow developers to determine a website visitor's country, city, region, and latitude/longitude from their IP address. The Google Maps Geolocation API is one of the most widely used options: it provides a simple and reliable way to locate website visitors using data from Google Maps, returning the latitude and longitude of a location along with its estimated accuracy. These libraries let developers personalize the user experience by showing relevant content based on a visitor's location, build custom maps and visualizations from geospatial data, and enable location-based advertising and marketing strategies.
PHP geolocation libraries are essential tools for web developers who want to create location-based web applications. We have handpicked the top and trending open-source PHP geolocation libraries for your next application development project:
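Of the functions mentioned above, distance calculation is the easiest to show in isolation. Below is a short sketch of the haversine great-circle distance (written in Python for brevity rather than PHP; the function name is invented, and 6,371 km is the conventional mean Earth radius):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two latitude/longitude points, in km."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# One degree of longitude along the equator is roughly 111 km
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))  # 111.2
```

The PHP libraries listed below wrap exactly this kind of computation (plus ellipsoidal refinements) behind higher-level geocoding APIs.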
GeoIP2 PHP:
- Used in Web Services, REST applications, etc.
- Provides an easy-to-use API for working with MaxMind's GeoIP2 and GeoLite2 databases.
- Allows developers to determine the location of website visitors based on their IP address.
Google Maps Geolocation API:
- Used to determine the location of website visitors using data from Google Maps.
- Allows developers to get the latitude and longitude of a location.
- Also provides the estimated accuracy of the location.
Leaflet:
- Used to handle dynamic map configurations working in a PHP context.
- It is lightweight and easy to use for building mobile-friendly interactive maps.
- Supports a wide range of map providers.
GeoPHP:
- Used in Geo, Map applications, etc.
- It’s a native PHP library for geometry operations and provides basic geospatial functionality.
- Features include point-in-polygon testing, distance calculations, and geometry simplification.
Geocoder:
- Used in Utilities, Command Line Interface, Laravel applications, etc.
- Provides geocoding and reverse geocoding services.
- Supports data from various providers such as Google Maps, OpenStreetMap, and Bing Maps.
IP2Location:
- Used in Networking, TCP applications, etc.
- Provides fast lookup and geolocation services based on IP address data.
- Includes a database of IP address ranges and location data for various countries and regions.
SmartyStreets:
- Used in Web Services, REST applications, etc.
- Provides address validation and geocoding services.
- Uses data from various providers such as Google Maps, OpenStreetMap, and Bing Maps.
Geotools:
- Used in Manufacturing, Utilities, Aerospace, Defense, Geo, Map applications, etc.
- Accepts almost all kinds of WGS84 geographic coordinates.
- Built on top of the Geocoder and React libraries.
Location:
- Used in Networking, TCP applications, etc.
- Helps retrieve a user's location from their IP address using various services.
- Works with PHP >= 7.3 and Laravel >= 5.0.
Judge Yvonne Gonzalez Rogers ordered that iOS apps must be allowed to support non-Apple payment options in the Epic v. Apple case. Apple also scored a partial victory, as the judge stopped short of calling it a monopoly, and Epic Games was ordered to pay Apple 30% of the revenue it had collected through its direct payment system. Epic is fighting a similar lawsuit against Google, and countries like South Korea have passed laws requiring Apple and Google to offer alternative payment systems to users in those countries. While the jury is still out on the broader questions, the case highlights two issues. First, is the 30% cut that developers often call the "Apple Tax" justified? Epic launched the Epic Games Store partly to demonstrate that a store can operate on a lower revenue cut of 12%. The second issue is platform and payment interoperability: when interoperability becomes mandated or a global best practice, developers should be ready to bring in payment gateways of their choice. The kandi kit for App Store Payment Alternatives showcases popular open-source payment gateways such as Omnipay, Active Merchant, and CI Merchant, plus libraries for connecting to leading payment platforms such as Stripe, Braintree, and Razorpay.
Omnipay
Core libraries and samples from Omnipay, a framework agnostic, multi-gateway payment processing library for PHP.
Active Merchant
Libraries on Active Merchant, a simple payment abstraction library extracted from Shopify.
CI Merchant
Though no longer actively supported, you can use the library to build and support your own gateway. If you are not looking to build but to use one, leverage other frameworks.
Braintree
Libraries for Braintree integration.
Razorpay
Libraries for Razorpay integration.
Stripe
Libraries for Stripe integration.
Build smart applications to collect and scrape data from a variety of online sources using these open-source data scraping libraries.
In today’s world, we are surrounded by loads of data of different types and from diverse sources, and every business organisation wants to make the best use of it. The ability to gather and utilise this data is a must-have skill for every data scientist.
Web scraping is the process of extracting structured and unstructured data from the web with the help of programs and exporting it into a useful format. You can use the Python language to build applications that harvest online data through these specific Python libraries.
The following list covers the top and trending libraries for web data scraping. By clicking on each, you can check out the overview, code examples, best applications and use cases, and a lot more. Scroll through:
- Working with HTTP to request a web page
- Complete web scraping framework
- Parsing HTML and XML
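The three roles above map to libraries such as Requests (HTTP), Scrapy (framework), and lxml or BeautifulSoup (parsing). As a stand-in for the parsing step, here is a minimal sketch using only Python's standard library; the HTML snippet is invented for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

html = '<ul><li><a href="/docs">Docs</a></li><li><a href="/blog">Blog</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/docs', '/blog']
```

In a real scraper, the HTML string would come from an HTTP request (urllib.request or Requests) rather than a literal, and a library like BeautifulSoup would replace the hand-rolled parser.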
Buttons are essentially the drivers of online interaction: we use them to log in to our emails, add products to our shopping carts, download photos, and confirm any and all actions. More than that, every button click is the successful conclusion of a front-end web developer’s hard work. That’s why it is crucial to spend time creating functional buttons that both look beautiful and provide visual cues to the user. JavaScript offers a ton of great button libraries to choose your essential UI components from, including Semantic-UI, a UI component framework based around useful principles; Buttons, a CSS button library built using Sass and Compass; and Ladda, buttons with built-in loading indicators. The following is a comprehensive list of the best open-source JavaScript button libraries in 2022.
A router connects your home computers to a local area network (LAN). It then routes packets intended for the Internet (email, web, etc.) through your router to your ISP's (Internet Service Provider) actual connection to the big bad Internet. This project came to life from a personal interest in hardware embedded design and software design in Linux with PHP. The main aim is to build a highly secure Wi-Fi Router out of a Raspberry Pi, easily configurable via a dynamic UI designed in HTML/PHP.
Status Indicators
Security
Authenticator and Router
Analyzer
Wi-Fi Routing
The Trump Media and Technology Group is being investigated by the Software Freedom Conservancy for non-compliance with copyleft licensing. The issue stems from President Donald Trump’s new social network, Truth Social, appearing to be forked from Mastodon. While Mastodon is open source and available to use, it is licensed under the Affero General Public License (AGPLv3), which requires The Trump Media and Technology Group to share its source code with all who used the site. If they fail to do this within 30 days, their rights and permissions in the software are automatically and permanently terminated, making their platform inoperable. So whether you are just a developer or a former POTUS, copyleft provisions apply to all. If you want to use open source, use kandi.openweaver.com: all libraries are matched to SPDX license definitions and highlighted clearly for appropriate use. Privacy, regulation, bias, and many other issues are weighing down popular social networks, and users are seeking self-hosted social platforms they can govern themselves. But do use them with the appropriate licenses. The kandi kit on Open Source Social Platforms lists popular open-source libraries that you can use to host private social channels.
Bluetooth and Wi-Fi are essential communication media on the internet, providing wireless connectivity between devices. Bluetooth lets us share data, files, voice, music, video, and much more between paired devices. Both technologies also provide tracking facilities: if we lose a gadget, or want to locate a particular device or person, Bluetooth and Wi-Fi tracking can play a vital role. The following libraries can help you implement Bluetooth and Wi-Fi tracking.
Build enhanced server-side scripting for various use cases in Ruby for your application. Get ratings, code snippets, and documentation for each library.
Python network programming libraries offer easy-to-use APIs for socket programming, allowing developers to create and manage network sockets and handle communication between two endpoints. These libraries support protocols like UDP, HTTPS, FTP, TCP, HTTP, SSH, and others. How far you get with Python networking depends on how well you understand these libraries and the protocols they build on.
Network programmability libraries like asyncio offer support for asynchronous programming, letting developers write scalable, high-performance network applications that can handle many connections. These libraries can also help serialize data, making it easier to send over the network. Popular libraries offer network analysis tools that allow developers to analyze and manipulate network packets, while libraries like Requests support web scraping to extract data from web pages.
Python offers various features to help developers add network functionality to their applications. As in most programming languages, there are two levels of network service access: high-level and low-level. Low-level access lets programmers use the basic socket support of the operating system through Python's standard packages.
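As a concrete example of the low-level access described above, the standard library's socket module can stand in for both ends of a connection. The sketch below uses socket.socketpair() to get a connected pair without touching the real network; in practice you would use a listening socket and socket.create_connection() instead:

```python
import socket

# A connected pair of sockets standing in for client and server endpoints
server, client = socket.socketpair()

client.sendall(b"ping")            # client sends a request
request = server.recv(1024)        # server reads it...
server.sendall(request[::-1])      # ...and echoes it back reversed
reply = client.recv(1024)

print(reply)  # b'gnip'
server.close()
client.close()
```

Everything the higher-level libraries in this list provide (framing, retries, TLS, async I/O) is ultimately layered over this send/recv loop.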
Here are the 11 best Python Network Programming Libraries handpicked to help developers:
opensnitch:
- It allows users to control and monitor network connections on their system by setting rules to allow or deny access.
- It is built using Python and the Qt framework.
- It uses the iptables firewall to control network traffic.
- It offers real-time information about network activity, like ports, protocols, and IP addresses.
scapy:
- It allows users to interact with network packets at a low level.
- It offers a powerful API to create, send, receive, and dissect network packets.
- It can be used for network analysis, packet manipulation, and packet sniffing.
trio:
- It helps write concurrent and asynchronous network applications.
- It offers an alternative to the standard libraries.
- It will focus on safety, usability, and simplicity.
- Its simple, safe design and comprehensive features make it a great fit for various projects.
crossbar:
- It helps build real-time, scalable, and distributed systems using the WebSocket protocol.
- It offers a powerful framework for building microservices, distributed systems, and event-driven applications.
- It allows developers to create real-time communication applications.
- It can help push data to clients in real-time using WebSockets.
napalm:
- It is a Network Automation and Programmability Abstraction Layer.
- It offers a vendor-agnostic API to manage and configure network devices.
- It supports various network devices like Juniper, Cisco, and Arista.
- Its comprehensive feature set makes it one of the best options for network programming and automation.
gns3-gui:
- It allows users to design and emulate complex networks.
- It uses real network devices, software-based routers & switches, and virtual machines.
- It offers various tools and features to build and manage network topologies.
- It can be used for various network engineering tasks.
- It can help with testing, designing and troubleshooting network configurations.
pennylane:
- It is an open source Python library for quantum computing, information, and machine learning.
- It offers a powerful framework for developing and experimenting with quantum algorithms.
- It helps integrate seamlessly with libraries.
- It is easy to incorporate quantum computing into machine learning workflows.
geneva:
- It runs exclusively on one side of the network connection and doesn’t require a proxy.
- It defeats censorship by modifying the network traffic.
- It can help with modifying packets and injecting traffic.
- It is composed of four basic packet-level actions for representing censorship evasion strategies.
evillimiter:
- It is a Python library and command-line tool to limit the bandwidth of devices on a network.
- It can be used for monitoring network traffic and testing network performance.
- Also, it can help with controlling bandwidth usage on shared networks.
- Its simple interface and flexible device identification make it easy to use.
fopnp:
- It offers code snippets and examples which demonstrate different networking protocols and concepts.
- It covers network protocols like HTTP, SSH, TCP/IP, and SMTP.
- It includes examples of TCP listeners, TCP client connections, and TCP server connections.
- It covers various concepts like network security, network performance optimization, and socket programming.
hamms:
- It is a Python library for calculating the hamming distance between two strings.
- It can be used with strings of any length and with any symbol.
- It performs error checking to ensure that the two strings are of equal length.
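The Hamming-distance computation described above is simple enough to sketch directly; the helper below is a hypothetical illustration, not the library's actual API:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must be of equal length")
    return sum(x != y for x, y in zip(a, b))

# Classic textbook example: three positions differ
print(hamming_distance("karolin", "kathrin"))  # 3
```

The equal-length check mirrors the error checking mentioned in the description: Hamming distance is only defined for strings of the same length.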
FAQs:
What is Network Automation and how does it relate to Python Network Programming?
The process of automating the management, configuration, deployment, operation, and testing of virtual and physical devices within a network is known as network automation. Python network automation relies on many modules used to automate network tasks like SSH connections to switches and routers.
What is a client in the context of network automation with Python?
A client is a computer program that uses Python modules and libraries to interact with network devices. It can perform tasks like operating network equipment, configuring topologies, and managing services and connectivity.
Is there any advantage to using more than one python library when doing advanced networking tasks such as SDN or cloud computing integration?
Yes, there can be advantages to using more than one Python library if you are performing advanced networking tasks. You can perform tasks like cloud computing integration and SDN. With the help of multiple libraries, you can leverage their strengths and build a more flexible solution.
What are the tips for choosing the right library when developing an application?
When incorporating third-party frameworks or libraries into your application, you should consider the following best practices:
- Creating and maintaining an inventory catalog of all the third-party libraries.
- Using a framework or library from trusted sources which are actively maintained and used by many applications.
- Proactively keeping components and libraries updated.
- Reducing the attack surface by encapsulating the library and exposing only the needed behavior in your application.
Here are some of the famous JavaScript ChatGPT Libraries. Some JavaScript ChatGPT Libraries' use cases include Live Chat Support, Online Shopping, Educational Platforms, Healthcare, and Banking.
Javascript chatgpt libraries are collections of code that provide developers with tools for creating and deploying chatgpt applications. They are designed to simplify the development and deployment of a chatbot, making it easier for developers to create a conversational interface that can provide a robust user experience. These libraries offer a variety of components and features, such as natural language processing, dialog management, text-to-speech and speech-to-text, and AI-based decision-making.
Let us have a look at the libraries in detail.
pusher-js
- Provides real-time communication.
- Pusher-js will reconnect the user to the chat room if the connection is somehow lost.
- Supports multiple platforms, including JavaScript, iOS, Android, Ruby, and Python.
cometchat-pro-react-sample-app
- Supports group messaging, allowing users to create and join group conversations.
- Built-in moderation system that allows admins to control conversations and enforce rules.
- End-to-end encryption ensures that all conversations are private.
ChatKit
- Provides synchronized user data across multiple clients, allowing for a more seamless chat experience.
- Built-in support for native push notifications.
- Built-in support for bots, making it easier to automate conversations.
LiveChat
- Integrates with various external services, such as CRM tools, helpdesks, and more.
- Offers automation tools to help agents respond faster to customer queries.
- Provides analytics and reporting features to help agents understand their customer base.
SimpleWebRTC
- Designed to be highly scalable, allowing large numbers of users to connect at once.
- Provides video and audio support, making communicating with friends and family easier.
- Requires no servers and very little code, allowing for a quick and easy setup.
pubnub-api
- Provides several advanced features, including message history, presence detection, file streaming, and more.
- Provides global coverage for its APIs.
- Provides the ability to send and receive messages in real time.
RTCMultiConnection
- Offers a customizable UI, allowing developers to create a unique look and feel for their applications.
- Secure and scalable library with support for WebSockets, WebRTC, and third-party APIs.
- Supports data streaming, file sharing, text chat, voice and video conferencing, and screen sharing.
kuzzle
- Built to scale to millions of concurrent users and devices.
- Secure, distributed, and highly available data storage layer.
- Advanced search and analytics capabilities.
Here are some of the famous Node.js VPN libraries. Some of their use cases include:
- Securely accessing a private network over the internet.
- Creating a virtual private network (VPN).
- Bypassing internet censorship.
- Securing data in transit.
Node.js VPN libraries enable developers to create applications that use a virtual private network (VPN). These libraries provide functions for connecting to a VPN server, establishing secure tunnels, encrypting and decrypting data, and managing the connection. They can be used to create applications such as secure file sharing, remote access, and encrypted communication.
Let us look at the libraries in detail below.
node-vpn
- Easy-to-use API that makes it simple to set up and manage a secure VPN connection.
- Highly secure, utilizing strong encryption algorithms and offering secure tunneling protocols.
- Supports multiple platforms, including Windows, Mac, Linux, iOS, and Android.
Strong-VPN
- Supports a wide range of protocols, including OpenVPN, IKEv2, and SSTP.
- Fast and reliable server connections.
- Compatible with most major operating systems, including Windows, macOS, iOS, Android, and Linux.
node-openvpn
- Highly secure, as it uses the OpenVPN protocol.
- Highly configurable, allowing users to customize the setup for their specific needs.
- Supports both IPv4 and IPv6 addressing.
fried-fame
- Easy to get started quickly compared to other VPN libraries.
- Designed with security in mind, using the latest encryption algorithms and techniques.
- An open-source project, so anyone can contribute and benefit from the development.
vpngate
- Offers an extra security layer for your data and connection.
- Offers longer connection times and faster speeds than other nodejs VPN libraries.
- Reliable, as it is regularly updated with the latest security protocols.
expressvpn
- Offers unrestricted access to streaming services, social media, and websites.
- Features a kill switch and other advanced features to protect your data.
- Offers a 30-day money-back guarantee.
algo
- Allows you to customize different VPN profiles for different devices or locations.
- Is designed to leverage strong encryption algorithms and secure authentication methods.
- Allows you to choose which ports and protocols are used for your VPN connection.
strongswan
- More secure than other nodejs vpn libraries.
- Tested and audited by independent experts, and is used by many organizations.
- Easy to set up and configure, and can be used on multiple operating systems and devices.
Here are some of the famous Python WebSocket Utilities Libraries. Some use cases of Python WebSocket Utilities Libraries include Real-time Chat and Messaging Applications, Online Gaming, IoT Applications, and Real-time Data Visualization and Dashboards.
Python WebSocket utilities libraries are collections of code that provide a set of utilities to help developers create and manage WebSocket connections in Python. These libraries typically provide methods to simplify WebSocket connection setup, message sending, message receiving and connection management. They can also provide additional features such as authentication and SSL/TLS support.
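Under the hood, these libraries all speak the RFC 6455 wire format. As a rough sketch of what "message sending" means at the frame level, here is a minimal text-frame encoder/decoder for short payloads (the function names are invented; real libraries also handle fragmentation, extended payload lengths, and control frames):

```python
import os

def encode_text_frame(payload: str, mask: bool = True) -> bytes:
    """Encode one unfragmented WebSocket text frame (payloads < 126 bytes only)."""
    data = payload.encode("utf-8")
    assert len(data) < 126, "extended payload lengths not handled in this sketch"
    header = bytes([0x81])  # FIN = 1, opcode 0x1 (text)
    if mask:  # client-to-server frames must be masked with a random 4-byte key
        key = os.urandom(4)
        masked = bytes(b ^ key[i % 4] for i, b in enumerate(data))
        return header + bytes([0x80 | len(data)]) + key + masked
    return header + bytes([len(data)]) + data  # server-to-client frames are unmasked

def decode_unmasked_text_frame(frame: bytes) -> str:
    """Decode a server-to-client (unmasked) text frame built as above."""
    assert frame[0] == 0x81 and not frame[1] & 0x80
    length = frame[1] & 0x7F
    return frame[2:2 + length].decode("utf-8")

print(decode_unmasked_text_frame(encode_text_frame("hello", mask=False)))  # hello
```

The utilities libraries below wrap this framing, plus the HTTP upgrade handshake and ping/pong keepalives, behind simple send/receive APIs.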
Let us have a look at some of these libraries in detail below.
tornado
- Can handle up to 10,000 simultaneous open connections, making it ideal for applications with high levels of concurrent users.
- Can handle multiple requests simultaneously without blocking requests.
- One of the few web frameworks that supports WebSocket connections.
gevent
- Based on greenlet and libevent, making it extremely fast, lightweight and efficient.
- Highly extensible and can be easily integrated with other Python libraries and frameworks.
- Provides a high level of concurrency, allowing multiple requests to be handled at the same time.
twisted
- Event-driven architecture makes it easy to build highly concurrent applications.
- Can be used to build distributed applications, which can be used to connect multiple machines over the network.
- Provides a low-level interface which makes it easier to work with websockets.
websockets
- Data can be sent and received quickly, allowing for real-time communication.
- Enable bidirectional communication between the client and the server.
- Use the secure websocket protocol (WSS) which encrypts all data sent over the connection.
websocket-client
- Built on top of the standard library's asyncio module, which allows for asynchronous communication with websockets.
- Supports secure websocket connections via TLS/SSL, as well as binary messages and fragmented messages.
- Supports custom headers and subprotocols, making it easy to communicate with specific services that require specific headers or subprotocols.
WebSocket-for-Python
- Supports multiple protocols such as WebSocket, HTTP, and TCP, allowing for more flexible usage.
- Has built-in security features such as authentication and encryption, allowing you to securely communicate with other applications.
- Is written in Python, making it easy to use and integrate with existing Python applications.
socketIO-client
- Supports multiple transports, including long polling and WebSockets, with cross-browser support.
- Support for namespaces, allowing for multiple independent connections to the same server.
- Allows for subscribing to multiple events, allowing for a more efficient implementation of your application.
pywebsocket
- Supports both server-side and client-side websocket connections.
- Provides support for websocket extensions.
- Can run standalone or as an Apache HTTP Server module (mod_pywebsocket), making it a versatile tool for developers.
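All of the libraries above ultimately speak the same wire format. As a rough illustration of what they abstract away, here is a hand-rolled encoder for a single masked client-to-server text frame per RFC 6455 (a minimal sketch only: payloads under 126 bytes, no fragmentation, no extensions):

```python
import os
import struct

def encode_client_text_frame(message: str) -> bytes:
    """Encode one masked client->server text frame (RFC 6455, short payloads only)."""
    payload = message.encode("utf-8")
    if len(payload) >= 126:
        raise ValueError("this sketch only handles payloads under 126 bytes")
    # Byte 1: FIN bit + text opcode (0x1); byte 2: MASK bit + payload length.
    header = struct.pack("!BB", 0x81, 0x80 | len(payload))
    mask = os.urandom(4)  # clients must mask payloads with a random 4-byte key
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
    return header + mask + masked

frame = encode_client_text_frame("hi")
```

A library such as websockets or websocket-client performs this framing, plus the HTTP upgrade handshake, fragmentation, ping/pong and close handling, on every send, which is why hand-rolling it is rarely worthwhile.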
Here are some famous Swift web server libraries. Their use cases include building a custom web server, creating a content delivery network (CDN), hosting a web application, and developing a mobile application backend.
Swift web server libraries are libraries that are designed to enable developers to create web applications using the Swift programming language. These libraries provide tools to help developers with server-side development tasks such as routing, templating, and data handling.
Let us look at these libraries in detail.
vapor
- Provides a type-safe, declarative framework for writing web applications.
- Built-in support for asynchronous programming.
- Offers an integrated authentication and authorization system.
Moya
- Supports advanced authentication methods, such as OAuth2, Basic Auth, Client Certificate Authentication and Bearer Tokens.
- Provides an easy way to mock network requests for testing and development.
- Allows for custom header, query, and request body encoding for each request.
Perfect
- Provides a scalable, high-performance web server that is optimized for Swift applications.
- Offers built-in support for TemplateKit, a powerful templating engine for producing dynamic web content.
- Has a robust set of development tools and libraries, including an integrated debugger and profiler.
Kitura
- Built with a modular architecture that supports pluggable components to customize the server’s functionality.
- Kitura-Stencil template engine allows you to write HTML templates in Swift, making it easier to create dynamic webpages.
- Supports cloud deployment, allowing you to easily deploy your applications to the cloud.
swifter
- Built-in security features such as TLS encryption and sandboxing.
- Designed to be easy to set up and get running quickly.
- Highly extensible and allows developers to customize the server to their own needs.
Zewo
- Built with an asynchronous, non-blocking I/O model.
- Optimized for both macOS and Linux and is fully compatible with Swift Package Manager.
- Built-in support for HTTP/2, TLS/SSL, and other security features.
blackfire
- Provides support for both synchronous and asynchronous requests.
- Offers a range of deployment options, including Docker and Kubernetes.
- Offers an Express-inspired API that will feel familiar to web developers.
Kitura-NIO
- Makes it easy to define routes and map them to specific functions.
- Uses non-blocking I/O operations.
- Provides native support for secure communication over TLS.
This kandi kit on Fediverse applications helps you build federated social applications like Twitter, LinkedIn, Goodreads, Instagram, Reddit, and many more alternatives based on the ActivityPub protocol. You can use these popular open source libraries, such as Mastodon, PeerTube, WriteFreely, Plume, Owncast, Pixelfed, Misskey, BookWyrm, and others, to build your applications across micro, macro blogging, writing, reviews, podcasts, link aggregators, and professional networks.
A federated social network is a type of social network comprising multiple different providers or platforms. Instead of being controlled by a single company or organization, it is decentralized and distributed across these different providers. It enables interoperability among multiple social networks in a transparent way. The focus is on data exchange, and different networks adopt one unified data architecture so that a robust, heterogeneous network-of-networks can emerge.
Federated social networks solve issues commonly found in traditional social networking platforms, such as lack of user control and limited diversity in services. By joining a federated social network, you can select from various profile providers or even host your own server. This allows for greater innovation and flexibility. Additionally, profiles on different servers can communicate with each other seamlessly.
A federated social network comprises multiple independent services that communicate with each other using standard protocols. This allows users to interact with friends on different social networks without joining the same one. In other words, users from different social websites can communicate with each other seamlessly.
ActivityPub is a decentralized social networking protocol based on the ActivityPump protocol from Pump.io. It offers a client/server API for managing content and a server-to-server API for delivering notifications and content between federated servers. ActivityPub is recognized as an official standard by the World Wide Web Consortium’s (W3C) Social Web Networking Group.
ActivityPub is a protocol that allows different social media platforms to communicate with each other. It does this by providing a standardized way for platforms to create, update and delete content and deliver notifications and content between servers. This means that users on one platform can interact with users on another platform that implements the ActivityPub protocol.
For example, Alice is on a social media platform called “SocialA” and Bob is on another platform called “SocialB”. Both SocialA and SocialB implement the ActivityPub protocol. This means that Alice can follow Bob’s account on SocialB from her account on SocialA. When Bob posts something on SocialB, Alice will see it in her feed on SocialA. Similarly, when Alice likes or comments on Bob’s post from her SocialA account, Bob will see the like or comment on his SocialB account.
This is possible because both platforms use the same standardized communication method through the ActivityPub protocol.
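The Alice-and-Bob exchange above is carried as JSON activities. A minimal sketch of the Follow activity SocialA would deliver to Bob's inbox on SocialB looks like this (the actor and object URLs are hypothetical example servers, not real instances):

```python
import json

# Minimal ActivityStreams 2.0 "Follow" activity; the user URLs below are
# hypothetical example servers, not real instances.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://social-a.example/users/alice",
    "object": "https://social-b.example/users/bob",
}

# This JSON document is what one server POSTs to the other's inbox endpoint.
payload = json.dumps(follow_activity)
```

Each federated server only needs to understand this common vocabulary, which is why platforms as different as Mastodon and PeerTube can interoperate.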
Here are some cool open source applications to build micro, and macro blogging, writing, reviews, podcasts, link aggregators, and professional networks.
Here are some of the famous Node JS HTTP Request Libraries. The use cases of these libraries include Server-side Web Apps, Mobile App Development, Data Processing, Data Analysis, and Network Monitoring.
Node.js HTTP Request Libraries are libraries that allow Node.js developers to make requests to web servers in order to retrieve data. This can be helpful when building web applications that need to pull in external data from other web servers. Examples of popular Node.js HTTP Request Libraries are Axios, Request (now deprecated), and Superagent.
Let us look at some of these famous libraries.
axios
- Automatically transforms all request and response data into JSON, making it easier to handle data.
- Supports interceptors which can be used to modify or transform requests or responses before they are handled by then or catch.
- Supports automatic token refresh.
fetch
- Supports cancelable requests and allows developers to abort requests at any point.
- Supports a wide range of HTTP methods beyond the standard GET and POST.
- Provides a Promise-based API, which simplifies the process of making asynchronous requests.
request
- Provides full access to all Node.js features, including streams, event emitters, and files.
- Is highly configurable and can be used to set timeouts, retry requests, and more.
- Has built-in support for gzip and deflate encoding.
superagent
- Allows easy setting headers, cookies, and other request parameters.
- Is highly configurable and supports many features that help developers make HTTP requests quickly and easily.
- Supports multipart encoding, allowing developers to send binary data easily.
nock
- Allows users to easily create custom request matchers, allowing users to create more powerful and precise requests.
- Users can record and replay their requests, making debugging and testing easy.
- Provides a sandbox for users to test their requests in isolation without making live calls or affecting the external environment.
node-fetch
- Supports global agent pooling and request pooling.
- Allows developers to use the same API for both server and client-side requests.
- Is designed to be lightweight and fast, making it ideal for applications that require rapid response times.
r2
- Provides support for the latest security protocols, such as TLS 1.3.
- Supports modern protocols such as HTTP/2 and WebSocket.
- Allows for the implementation of custom middleware.
needle
- Has built-in support for parsing JSON and can be used to create custom parsers for other formats.
- Is well-documented and provides a comprehensive set of APIs for making HTTP requests.
- Is optimized for low latency and high throughput, making it suitable for various applications.
unirest-nodejs
- Has built-in support for automatically following redirects.
- Allows sending both synchronous and asynchronous requests.
- Has built-in support for automatically compressing and decompressing requests and responses.
hyperquest
- Has built-in support for caching and retrying requests, making it easy to build resilient applications.
- Supports server-side data serialization and deserialization, allowing developers to transform data between client and server quickly.
- Is built on a minimalistic core, making it suitable for applications that need a small footprint.
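The request/response cycle every library above wraps looks roughly the same: open a connection, send the request, read and decode the body. The sketch below uses Python's standard library purely as a self-contained illustration of that flow, including the automatic JSON parsing that axios advertises:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply with a small JSON body echoing the requested path.
        body = json.dumps({"path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 lets the OS pick a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/hello") as resp:
    data = json.load(resp)  # decode the JSON response, as axios does for you

server.shutdown()
```

An HTTP request library adds conveniences on top of this raw flow: interceptors, retries, redirects, connection pooling and encoding negotiation.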
Here are some of the famous Swift Socket Libraries. The use cases of these libraries include Developing Custom Network Applications, Developing Server-Side Applications, Developing Streaming Applications, Developing Peer-to-Peer Applications, and Developing Distributed Applications.
Swift Socket Libraries are libraries written in the Swift language that provide an interface for applications to communicate over a network. These libraries allow developers to create custom networking applications that are capable of sending and receiving data over the internet. They provide an easy-to-use interface for developers to create powerful applications that can interact with other computers, servers, and services.
Let us look at some of these famous libraries.
Starscream
- Supports WebSocket compression, which reduces the amount of data sent over the wire.
- Provides methods to read and write data to the socket connection easily.
- Supports both secure (wss://) and insecure (ws://) connections.
swift-nio
- Provides a non-blocking IO model, allowing high scalability and performance.
- Supports many protocols, including HTTP, HTTPS, WebSockets, and more.
- Supports reactive programming, allowing developers to write asynchronous code more efficiently
SwiftSocket
- Supports IPv4 and IPv6, as well as TLS/SSL encryption.
- Provides a well-documented API, making it easy to integrate into your existing Swift projects.
- Supports multiple platforms including iOS, macOS, tvOS, and watchOS.
SwiftWebSocket
- Provides a fast and lightweight WebSocket client written in Swift.
- Provides real-time diagnostics and debugging features, including logging and message tracing.
- Provides full control over message headers and payloads.
BlueSocket
- Highly customizable, it allows developers to customize the features of their network protocols and sockets.
- Supports modern security protocols such as TLS to protect users' data.
- Has built-in support for IPv6, allowing developers to create applications easily.
sockets
- Provides a flexible way to configure socket options.
- Provides secure communication between client and server with encryption and authentication.
- Provides high performance and low latency for data transfers.
JustLog
- Offers a unique feature called "context switching," which allows developers to switch between different networks while their application runs.
- Provides an efficient asynchronous architecture which makes it easy to manage multiple connections.
- Supports real-time data synchronization with the help of an efficient and reliable protocol.
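Underneath every library in this list are the same OS socket primitives. A loopback round trip, sketched here in Python for brevity (the underlying system calls are the same ones a Swift socket library wraps), looks like this:

```python
import socket

# Listen on a loopback TCP socket; port 0 asks the OS for a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

# Connect a client, accept the connection, and exchange a few bytes.
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()
client.sendall(b"ping")
reply = conn.recv(4)

for s in (client, conn, server):
    s.close()
```

What the Swift libraries add on top of bind/listen/accept/send/recv is type safety, non-blocking I/O, TLS, and higher-level protocols such as WebSocket.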
Here are the best open-source PHP routing libraries for your applications. You can use these tools in PHP web development to manage and direct user requests to appropriate controllers or actions.
These libraries allow developers to easily define routes that map URL patterns to specific PHP functions or methods. This simplifies handling HTTP requests and enables creation of complex web applications with clean and organized code. They provide a powerful routing system with a simple, intuitive API that lets developers define routes using regular expressions, placeholders, and custom conditions. Along with advanced features like route caching and generating URLs from route names, these libraries enable developers to:
- Organize their code more effectively,
- Reduce the complexity of handling HTTP requests, and
- Provide a more intuitive user experience for website visitors.
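The pattern-to-handler mapping these libraries implement can be sketched in a few lines. The dispatcher below is a language-agnostic illustration written in Python, with a hypothetical /users/{id} route, not the API of any particular PHP router:

```python
import re

routes = []

def compile_route(pattern: str):
    # Turn "{name}" placeholders into named regex groups, as routers do.
    return re.compile("^" + re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern) + "$")

def add_route(pattern, handler):
    routes.append((compile_route(pattern), handler))

def dispatch(path):
    # Try each compiled route in order; call the first handler that matches.
    for regex, handler in routes:
        match = regex.match(path)
        if match:
            return handler(**match.groupdict())
    return "404 Not Found"

add_route("/users/{id}", lambda id: f"user {id}")
result = dispatch("/users/42")  # -> "user 42"
```

Real routers layer caching, HTTP-method filtering, middleware, and URL generation on top of this core match-and-dispatch loop.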
Overall, PHP routing libraries are essential for web developers to create scalable and maintainable web applications. We have handpicked the top and trending open-source PHP routing libraries for your next application development project:
Symfony Routing:
- Used to define routes using a variety of methods.
- It’s a powerful routing system with a simple and intuitive API.
- Features include regular expressions, placeholders, and custom conditions.
Laravel Routing:
- Used in Networking, Router applications, etc.
- Provides a clean, expressive syntax for defining routes.
- Features include middleware, route groups, and named routes.
Slim Routing:
- Used for building RESTful APIs.
- It’s a lightweight and fast routing library.
- Features include route caching, route groups, and route middleware.
FastRoute:
- Used in Networking, Router applications, etc.
- It’s a high-performance routing library optimized for speed and flexibility.
- Supports multiple HTTP methods, placeholders, and regular expressions.
Phroute:
- Used in Web Services, REST applications, etc.
- It’s a simple and fast routing library with a straightforward API.
- Features include route groups, before and after filters, and route caching.
Flight Routing:
- Used for named routes, HTTP method support, URL parameters, and other features.
- It’s a micro-framework that includes a simple routing system.
- Offers a fast PHP router that can easily integrate with other routers.
Aura Router:
- Used due to its powerful and flexible web routing for PSR-7 requests.
- Supports multiple HTTP methods, placeholders, and named routes.
- Other features include route middleware, subdomain routing, and route caching.
AltoRouter:
- Used in Web Services, REST applications, etc.
- It’s a lightweight and fast routing library heavily inspired by klein.php.
- Supports regular expressions, named routes, and HTTP method filtering.
Klein Routing:
- Used in Networking, Router, Framework applications, etc.
- It’s a micro-framework with a fast & flexible router for PHP 5.3+.
- Offers a simple routing system with route caching, before and after filters, and URL parameters.
Frame:
- Used in Server, Application Framework, Framework applications, etc.
- It’s a super simple PHP framework that uses Klein for routing.
- Supports regular expressions and named routes.
Here are the best open-source JavaScript routing libraries for your applications. You can use these to organize code and simplify navigation by defining routes or URLs for different components of a website or application.
JavaScript routing libraries are essential for managing client-side routing in modern single-page applications. They simplify navigation, improve user experience, and help developers organize and maintain their codebase. These libraries provide a declarative approach to routing, allowing you to define routes intuitively, with support for dynamic routing, nested routes, and route parameters. Certain libraries also provide advanced features such as route guards, which let developers control access to certain routes based on user authentication or other criteria.
These libraries are tailored to the specific needs of their respective frameworks. We have handpicked the top and trending open-source JavaScript routing libraries for your next application development project:
React Router:
- Used in Networking, Router, React-based applications, etc.
- Offers a declarative approach to routing.
- Provides support for dynamic routing, nested routes, and route parameters.
Vue Router:
- Used for advanced features like route guards.
- Provides a comprehensive routing system.
- Supports nested and dynamic routes, transition effects, and more.
Reach Router:
- Used in User Interface, Frontend Framework, React applications, etc.
- It’s a simple and lightweight routing library for React applications.
- Offers a declarative API for defining routes.
Director:
- Used in Networking, Router, Nodejs, Express.js applications, etc.
- Works with client-side and server-side JavaScript applications.
- Supports both the browser and node.js environments.
Navigo:
- Used in User Interface, Frontend Framework, React applications, etc.
- It’s a lightweight and easy-to-use routing library.
- Supports hash-based and HTML5 pushState routing.
Crossroads:
- Used to handle navigation in WebApp.
- Provides a flexible and powerful API for defining routes.
- Framework-agnostic and works with any JavaScript application.
Ember.js:
- Used in User Interface, Frontend Framework, Framework applications, etc.
- Helps reduce the time, effort, and resources for building web applications.
- Routing functionality includes rendering templates, loading data models, and handling actions.
Trending Discussions on Networking
Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)
Laravel Homestead - page stopped working ERR_ADDRESS_UNREACHABLE
Accessing PostgreSQL (on wsl2) from DBeaver (on Windows) fails: "Connection refused: connect"
Why URL re-writing is not working when I do not use slash at the end?
Standard compliant host to network endianess conversion
How to configure proxy in emulators in new versions of Android Studio?
Unable to log egress traffic HTTP requests with the istio-proxy
Dynamodb local web shell does not load
Cancelling an async/await Network Request
How to configure GKE Autopilot w/Envoy & gRPC-Web
QUESTION
Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)
Asked 2022-Apr-01 at 07:26
I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.
Output from /etc/hosts:
127.0.0.1 localhost
127.0.1.1 main
Excerpt from microk8s status:
addons:
  enabled:
    dashboard         # The Kubernetes dashboard
    ha-cluster        # Configure high availability on the current node
    ingress           # Ingress controller for external access
    metrics-server    # K8s Metrics Server for API access to service metrics
I checked for the running dashboard (kubectl get all --all-namespaces):
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-node-2jltr                            1/1     Running   0          23m
kube-system   pod/calico-kube-controllers-f744bf684-d77hv      1/1     Running   0          23m
kube-system   pod/metrics-server-85df567dd8-jd6gj              1/1     Running   0          22m
kube-system   pod/kubernetes-dashboard-59699458b-pb5jb         1/1     Running   0          21m
kube-system   pod/dashboard-metrics-scraper-58d4977855-94nsp   1/1     Running   0          21m
ingress       pod/nginx-ingress-microk8s-controller-qf5pm      1/1     Running   0          21m

NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP    23m
kube-system   service/metrics-server              ClusterIP   10.152.183.81    <none>        443/TCP    22m
kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.103   <none>        443/TCP    22m
kube-system   service/dashboard-metrics-scraper   ClusterIP   10.152.183.197   <none>        8000/TCP   22m

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node                         1         1         1       1            1           kubernetes.io/os=linux   23m
ingress       daemonset.apps/nginx-ingress-microk8s-controller   1         1         1       1            1           <none>                   22m

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers     1/1     1            1           23m
kube-system   deployment.apps/metrics-server              1/1     1            1           22m
kube-system   deployment.apps/kubernetes-dashboard        1/1     1            1           22m
kube-system   deployment.apps/dashboard-metrics-scraper   1/1     1            1           22m

NAMESPACE     NAME                                                   DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-69d7f794d9     0         0         0       23m
kube-system   replicaset.apps/calico-kube-controllers-f744bf684      1         1         1       23m
kube-system   replicaset.apps/metrics-server-85df567dd8              1         1         1       22m
kube-system   replicaset.apps/kubernetes-dashboard-59699458b         1         1         1       21m
kube-system   replicaset.apps/dashboard-metrics-scraper-58d4977855   1         1         1       21m
I want to expose the microk8s dashboard within my local network to access it through http://main/dashboard/
To do so, I created the following ingress.yaml (nano ingress.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: main
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
Applying the ingress config with kubectl apply -f ingress.yaml gave the following error:
error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
Help would be much appreciated, thanks!
Update: @harsh-manvar pointed out a mismatch in the config version. I have rewritten ingress.yaml to a very stripped-down version:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
Applying this works, and the ingress rule gets created:
NAMESPACE     NAME        CLASS    HOSTS   ADDRESS     PORTS   AGE
kube-system   dashboard   public   *       127.0.0.1   80      11m
However, when I access the dashboard through http://<ip-of-kubernetes-master>/dashboard, I get a 400 error.
Log from the ingress controller:
192.168.0.123 - - [10/Oct/2021:21:38:47 +0000] "GET /dashboard HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" 466 0.002 [kube-system-kubernetes-dashboard-443] [] 10.1.76.3:8443 48 0.000 400 ca0946230759edfbaaf9d94f3d5c959a
Does the dashboard also need to be exposed using the microk8s proxy? I thought the ingress controller would take care of this, or did I misunderstand?
ANSWER
Answered 2021-Oct-10 at 18:29
error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"

This is due to a mismatch in the Ingress API version: you are running Kubernetes v1.22.2, while the apiVersion in your YAML is the old extensions/v1beta1, which has been removed. You need to change the apiVersion based on the Ingress API and Kubernetes versions you are running; networking.k8s.io/v1 is available from Kubernetes 1.19 and will work in 1.22 also.
A good example: https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
QUESTION
Laravel Homestead - page stopped working ERR_ADDRESS_UNREACHABLE
Asked 2022-Mar-25 at 09:10
I took my laptop out of the house for a couple of days and didn't even get to turn it on during that time. When I came back, ready to keep fiddling with my project, the page had stopped working all of a sudden: I started getting ERR_ADDRESS_UNREACHABLE in the browser.
I've uninstalled the Homestead box, Vagrant, and VirtualBox, with a restart after each, and reinstalled everything; same issue.
I cannot ping the 192.168.10.10 address, but I can SSH into the box no problem.
Running macOS Big Sur, VirtualBox 6.1, Vagrant 2.2.18, and whatever the latest Homestead version is. I'm about ready to quit programming altogether, this is super frustrating. I'd really appreciate any help. Thank you.
Homestead.yaml
---
ip: "192.168.10.10"
memory: 2048
cpus: 2
provider: virtualbox

authorize: ~/.ssh/id_rsa.pub

keys:
  - ~/.ssh/id_rsa

folders:
  - map: ~/Documents/Code
    to: /home/vagrant/code

sites:
  - map: homestead.test
    to: /home/vagrant/code/PHP/test/public

databases:
  - homestead

features:
  - mysql: true
  - mariadb: false
  - postgresql: false
  - ohmyzsh: false
  - webdriver: false

services:
  - enabled:
      - "mysql"
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'json'
require 'yaml'

VAGRANTFILE_API_VERSION ||= "2"
confDir = $confDir ||= File.expand_path(File.dirname(__FILE__))

homesteadYamlPath = confDir + "/Homestead.yaml"
homesteadJsonPath = confDir + "/Homestead.json"
afterScriptPath = confDir + "/after.sh"
customizationScriptPath = confDir + "/user-customizations.sh"
aliasesPath = confDir + "/aliases"

require File.expand_path(File.dirname(__FILE__) + '/scripts/homestead.rb')

Vagrant.require_version '>= 2.2.4'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  if File.exist? aliasesPath then
    config.vm.provision "file", source: aliasesPath, destination: "/tmp/bash_aliases"
    config.vm.provision "handle_aliases", type: "shell" do |s|
      s.inline = "awk '{ sub(\"\r$\", \"\"); print }' /tmp/bash_aliases > /home/vagrant/.bash_aliases && chown vagrant:vagrant /home/vagrant/.bash_aliases"
    end
  end

  if File.exist? homesteadYamlPath then
    settings = YAML::load(File.read(homesteadYamlPath))
  elsif File.exist? homesteadJsonPath then
    settings = JSON::parse(File.read(homesteadJsonPath))
  else
    abort "Homestead settings file not found in #{confDir}"
  end

  Homestead.configure(config, settings)

  if File.exist? afterScriptPath then
    config.vm.provision "Run after.sh", type: "shell", path: afterScriptPath, privileged: false, keep_color: true
  end

  if File.exist? customizationScriptPath then
    config.vm.provision "Run customize script", type: "shell", path: customizationScriptPath, privileged: false, keep_color: true
  end

  if Vagrant.has_plugin?('vagrant-hostsupdater')
    config.hostsupdater.remove_on_suspend = false
    config.hostsupdater.aliases = settings['sites'].map { |site| site['map'] }
  elsif Vagrant.has_plugin?('vagrant-hostmanager')
    config.hostmanager.enabled = true
    config.hostmanager.manage_host = true
    config.hostmanager.aliases = settings['sites'].map { |site| site['map'] }
  elsif Vagrant.has_plugin?('vagrant-goodhosts')
    config.goodhosts.aliases = settings['sites'].map { |site| site['map'] }
  end

  if Vagrant.has_plugin?('vagrant-notify-forwarder')
    config.notify_forwarder.enable = true
  end
end
I did try to set up networking as described here and here, but nothing worked.
ANSWER
Answered 2021-Oct-29 at 20:41
I think this is the fix, but I couldn't get it running until now:
Anything in the 192.168.56.0/21 range will work out of the box without any custom configuration, per VirtualBox's documentation.
https://github.com/laravel/homestead/issues/1717
Found some more related information here:
https://discuss.hashicorp.com/t/vagrant-2-2-18-osx-11-6-cannot-create-private-network/30984/16
Update 2021-Oct-29: I downgraded VirtualBox to 6.1.26 and it's working again.
QUESTION
Accessing PostgreSQL (on wsl2) from DBeaver (on Windows) fails: "Connection refused: connect"
Asked 2022-Mar-17 at 04:30
What I'm trying to do is use Postgres and access it from DBeaver.
- Postgres is installed into wsl2 (Ubuntu 20)
- DBeaver is installed into Windows 10
According to this doc, if you access an app running on Linux from Windows, localhost can be used.
However, the connection is refused with localhost. Also, I don't know what this message means: "Connection refused: connect".
Does anyone see a potential cause for this? Any advice will be appreciated.
Note:
- The password should be fine. When I use psql in wsl2 and type in the password, psql works with that password.
- I don't have Postgres on the Windows side; it exists only in wsl2.
ANSWER
Answered 2021-Oct-19 at 08:19
I found a solution by myself. I just had to allow TCP connections on wsl2 (Ubuntu) and then restart Postgres:

sudo ufw allow 5432/tcp
# You should see "Rules updated" and/or "Rules updated (v6)"
sudo service postgresql restart
I didn't change the IPv4/IPv6 connection info. Here's what I see in pg_hba.conf:

# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
QUESTION
Why URL re-writing is not working when I do not use slash at the end?
Asked 2022-Mar-13 at 20:40
I have a simple ingress configuration file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1/
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
in which requests coming to /link1 or /link1/ are rewritten to /link2/link3/.
When I access it using http://tutorial.com/link1/ I am shown the correct result, but when I access it using http://tutorial.com/link1, I get a 404 Not Found.
The nginx-ingress-tut-service has the following endpoints:
- /
- /link1
- /link2/link3

I am a beginner in the web domain; any help will be appreciated.
When I change it to-
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080

it starts working fine, but can anybody tell me why it is not working with /link1/?
Helpful resources - https://kubernetes.io/docs/concepts/services-networking/ingress/#examples
https://kubernetes.github.io/ingress-nginx/examples/rewrite/
Edit: Please also explain what happens when you write a full HTTP link in nginx.ingress.kubernetes.io/rewrite-target.
ANSWER
Answered 2022-Mar-13 at 20:40
The answer is posted in the comment:

Well, /link1/ is not a prefix of /link1, because a prefix must be the same length as or shorter than the target string.
If you have

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1/
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080

then the string to match must have a / character at the end of the path for everything to work correctly. In this situation, if you try to access the link http://tutorial.com/link1, you will get a 404 error, because the ingress was expecting http://tutorial.com/link1/.
For more, you can see examples of the rewrite rule and the documentation about path types:

Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:
- ImplementationSpecific: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate pathType or treat it identically to the Prefix or Exact path types.
- Exact: Matches the URL path exactly and with case sensitivity.
- Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is an element-wise prefix of p of the request path.
EDIT: Based on the documentation this should work, but it looks like there is a fresh problem with the NGINX ingress controller. The problem is still unresolved. You can use the workaround posted in this topic or try to change yours to something similar to this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1(/|$)
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
QUESTION
Standard-compliant host-to-network endianness conversion
Asked 2022-Mar-03 at 15:19
I am amazed at how many topics on StackOverflow deal with finding out the endianness of the system and converting endianness. I am even more amazed that there are hundreds of different answers to these two questions. All proposed solutions that I have seen so far are based on undefined behaviour, non-standard compiler extensions or OS-specific header files. In my opinion, this question is only a duplicate if an existing answer gives a standard-compliant, efficient (e.g., using x86 bswap), compile-time-enabled solution.
Surely there must be a standard-compliant solution available that I am unable to find in the huge mess of old "hacky" ones. It is also somewhat strange that the standard library does not include such a function. Perhaps the attitude towards such issues is changing, since C++20 introduced a way to detect endianness into the standard (via std::endian), and C++23 will probably include std::byteswap, which flips endianness.
In any case, my questions are these:
Starting at what C++ standard is there a portable standard-compliant way of performing host to network byte order conversion?
I argue below that it's possible in C++20. Is my code correct and can it be improved?
Should such a pure-C++ solution be preferred to OS-specific functions such as, e.g., POSIX htonl? (I think yes)
I think I can give a C++23 solution that is OS-independent, efficient (no system call, uses x86 bswap) and portable to little-endian and big-endian systems (but not portable to mixed-endian systems):
// requires C++23. see https://gcc.godbolt.org/z/6or1sEvKn
#include <bit>
#include <concepts> // std::integral
#include <type_traits>
#include <utility>

constexpr inline auto host_to_net(std::integral auto i) {
  static_assert(std::endian::native == std::endian::big || std::endian::native == std::endian::little);
  if constexpr (std::endian::native == std::endian::big) {
    return i;
  } else {
    return std::byteswap(i);
  }
}
Since std::endian is available in C++20, one can give a C++20 solution for host_to_net by implementing byteswap manually. A solution is described here, quote:
// requires C++17
#include <climits>
#include <cstdint>
#include <type_traits>
#include <utility> // std::index_sequence

template<class T, std::size_t... N>
constexpr T bswap_impl(T i, std::index_sequence<N...>) {
  return ((((i >> (N * CHAR_BIT)) & (T)(unsigned char)(-1)) <<
           ((sizeof(T) - 1 - N) * CHAR_BIT)) | ...); // fold expression
}

template<class T, class U = typename std::make_unsigned<T>::type>
constexpr U bswap(T i) {
  return bswap_impl<U>(i, std::make_index_sequence<sizeof(T)>{});
}
The linked answer also provides a C++11 byteswap, but that one seems to be less efficient (it is not compiled down to x86 bswap). I think there should be an efficient C++11 way of doing this, too (using either less template-nonsense or even more), but I don't care about older C++ and didn't really try.
Assuming I am correct, the remaining question is: can one determine system endianness before C++20 at compile time in a standard-compliant and compiler-agnostic way? None of the answers here seem to achieve this. They use reinterpret_cast (not compile time), OS headers, union aliasing (which I believe is UB in C++), etc. Also, for some reason, they try to do it "at runtime", although a compiled executable will always run under the same endianness.
One could do it outside of a constexpr context and hope it's optimized away. On the other hand, one could use system-defined preprocessor definitions and account for all platforms, as seems to be the approach taken by Boost. Or maybe (although I would guess the other way is better?) use macros and pick platform-specific htonl-style functions from networking libraries (done, e.g., here (GitHub))?
ANSWER
Answered 2022-Feb-06 at 05:48
"compile time-enabled solution."
Consider whether this is a useful requirement in the first place. The program isn't going to be communicating with another system at compile time. What is the case where you would need to use the serialised integer in a compile-time constant context?
- Starting at what C++ standard is there a portable standard-compliant way of performing host to network byte order conversion?
It's possible to write such function in standard C++ since C++98. That said, later standards bring tasty template goodies that make this nicer.
There isn't such function in the standard library as of the latest standard.
- Should such a pure-c++ solution be preferred to OS specific functions such as, e.g., POSIX-htonl? (I think yes)
Advantage of POSIX is that it's less important to write tests to make sure that it works correctly.
Advantage of pure C++ function is that you don't need platform specific alternatives to those that don't conform to POSIX.
Also, the POSIX htonX functions are only for 16-bit and 32-bit integers. You could instead use the htobeXX functions that are available in some *BSDs and in Linux (glibc).
Here is what I have been using since C++17. Some notes beforehand:
Since endianness conversion is always[1] for purposes of serialisation, I write the result directly into a buffer. When converting to host endianness, I read from a buffer.
I don't use CHAR_BIT because the network doesn't know my byte size anyway. A network byte is an octet, and if your CPU is different, then these functions won't work. Correct handling of non-octet bytes is possible but unnecessary work unless you need to support network communication on such a system. Adding an assert might be a good idea.

I prefer to call it big endian rather than "network" endian. There's a chance that a reader isn't aware of the convention that the de-facto endianness of the network is big.
Instead of checking "if native endianness is X, do Y else do Z", I prefer to write a function that works with all native endianness. This can be done with bit shifts.
Yeah, it's constexpr. Not because it needs to be, but just because it can be. I haven't been able to produce an example where dropping constexpr would produce worse code.
// requires C++17
#include <cstddef>
#include <type_traits>
#include <utility>

// helper to promote an integer type
template <class T>
using promote_t = std::decay_t<decltype(+std::declval<T>())>;

template <class T, std::size_t... I>
constexpr void
host_to_big_impl(
    unsigned char* buf,
    T t,
    [[maybe_unused]] std::index_sequence<I...>) noexcept
{
    using U = std::make_unsigned_t<promote_t<T>>;
    constexpr U lastI = sizeof(T) - 1u;
    constexpr U bits = 8u;
    U u = t;
    ( (buf[I] = u >> ((lastI - I) * bits)), ... );
}

template <class T>
constexpr void
host_to_big(unsigned char* buf, T t) noexcept
{
    using Indices = std::make_index_sequence<sizeof(T)>;
    return host_to_big_impl<T>(buf, t, Indices{});
}
[1] In all use cases I've encountered. Conversions from integer to integer can be implemented by delegating to these if you have such a case, although they cannot be constexpr due to the need for reinterpret_cast.
QUESTION
How to configure proxy in emulators in new versions of Android Studio?
Asked 2022-Feb-23 at 14:14
I need to configure the proxy manually in my emulator through Android Studio. The official Android documentation suggests that this change can be made in the "Settings" tab of the emulator's extended controls. The problem is that this documentation seems to be outdated, as this setting is no longer displayed in the "Settings" tab of the extended controls of Android Studio's emulators.
My version of Android Studio:

Android Studio Bumblebee | 2021.1.1
Build #AI-211.7628.21.2111.8092744, built on January 19, 2022
Runtime version: 11.0.11+9-b60-7590822 amd64
VM: OpenJDK 64-Bit Server VM by Oracle Corporation
Windows 10 10.0
GC: G1 Young Generation, G1 Old Generation
Memory: 1280M
Cores: 8
Registry: external.system.auto.import.disabled=true
Non-Bundled Plugins: com.wakatime.intellij.plugin (13.1.10), wu.seal.tool.jsontokotlin (3.7.2), org.jetbrains.kotlin (211-1.6.10-release-923-AS7442.40), com.developerphil.adbidea (1.6.4), org.jetbrains.compose.desktop.ide (1.0.0), ru.adelf.idea.dotenv (2021.2), org.intellij.plugins.markdown (211.7142.37)
ANSWER
Answered 2022-Feb-17 at 19:12
After a while trying to find a solution to this problem, I saw that an emulator running outside Android Studio provides these options. To run a standalone Android Studio emulator, see the official documentation or simply enter the command:
emulator -avd <avd_name>
In my case I'm using an AVD named PIXEL 4 API 30, so the command will be emulator -avd PIXEL_4_API_30. If you are on Windows you may have problems running this command, so I suggest you see this.
The solution proposed by @Inliner also solves this problem.
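If you want the proxy configured at launch time rather than through the extended controls, the standalone emulator also accepts a proxy flag on the command line; a sketch (the AVD name and proxy address are placeholders for your own values):

```
# List the AVDs available on this machine
emulator -list-avds

# Launch an AVD with an explicit HTTP proxy (host/port are examples)
emulator -avd PIXEL_4_API_30 -http-proxy http://10.0.0.2:8888
```

This sidesteps the missing UI entirely, since the proxy is applied before the emulator boots.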
QUESTION
Unable to log egress traffic HTTP requests with the istio-proxy
Asked 2022-Feb-11 at 10:45 I am following this guide.
Ingress requests are getting logged, and egress traffic control works as expected, except that I am unable to log egress HTTP requests. What is missing?
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: myapp
spec:
  workloadSelector:
    labels:
      app: myapp

  outboundTrafficPolicy:
    mode: REGISTRY_ONLY

  egress:
  - hosts:
    - default/*.example.com
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example

spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
  - '*.example.com'

  ports:
  - name: https
    protocol: TLS
    number: 443
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
Kubernetes 1.22.2, Istio 1.11.4
ANSWER
Answered 2022-Feb-07 at 17:14 AFAIK Istio collects only ingress HTTP logs by default.
The Istio documentation has an older article (from 2018) describing how to enable HTTP logs for egress traffic.
Keep in mind that some of that information may be outdated, but I believe this is the part you are missing.
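For what it's worth, the Telemetry resource doesn't have to be mesh-wide; it also accepts a workload selector, so access logging can be scoped to the pod in question. A sketch reusing the question's labels (the resource name here is made up):

```
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: myapp-logging
spec:
  selector:
    matchLabels:
      app: myapp
  accessLogging:
  - providers:
    - name: envoy
```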
QUESTION
Dynamodb local web shell does not load
Asked 2022-Jan-15 at 14:55 I am running DynamoDB locally using the instructions here. To remove potential Docker networking issues I am using the "Download Locally" version of the instructions. Before running Dynamo locally I run aws configure to set some fake values for the AWS access key, secret, and region; here is the output:
$ aws configure
AWS Access Key ID [****************fake]:
AWS Secret Access Key [****************ake2]:
Default region name [local]:
Default output format [json]:
Here is the output of running Dynamo locally:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port:	8000
InMemory:	false
DbPath:	null
SharedDb:	true
shouldDelayTransientStatuses:	false
CorsParams:	*
I can confirm that the DynamoDB is running locally successfully by listing tables using aws cli
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
    "TableNames": []
}
but when I visit http://localhost:8000/shell in my browser, this is the error I get and the page does not load.
I tried running curl on the shell to see if I can get a more useful error message:
$ curl http://localhost:8000/shell
{
"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
"Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}
I tried looking up the error above, but since the shell runs entirely in the browser there is no obvious place to supply credentials. Any help is appreciated on how I can run the DynamoDB JavaScript web shell with this setup.
Software versions:
aws cli: aws-cli/2.4.7 Python/3.9.9 Darwin/20.6.0 source/x86_64 prompt/off
OS: macOS Big Sur 11.6.2 (20G314)
ANSWER
Answered 2022-Jan-13 at 08:12 As I answered in DynamoDB local http://localhost:8000/shell, this appears to be a regression in recent versions of DynamoDB Local: the shell mysteriously stopped working, whereas versions from a year ago work.
Somebody should report it to Amazon. If there is some flag that newer versions require to enable the shell, it isn't documented anywhere I can find.
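When chasing errors like the one above, it helps to key off the error name that DynamoDB Local returns rather than the full message; the name is the part of the `__type` field after the `#`. A minimal stdlib-only sketch (the error body is copied from the question's curl output):

```python
import json

# Error body returned by DynamoDB Local for the /shell request (from the question).
body = ('{"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",'
        '"Message":"Request must contain either a valid (registered) AWS access key ID '
        'or X.509 certificate."}')

err = json.loads(body)
# The text after '#' is the DynamoDB error name; the prefix is just the API namespace.
error_name = err["__type"].split("#", 1)[1]
print(error_name)  # MissingAuthenticationToken
```

`MissingAuthenticationToken` here comes from the shell endpoint itself, not from your CLI credentials, which is consistent with the regression described in the answer.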
QUESTION
Cancelling an async/await Network Request
Asked 2022-Jan-03 at 22:23 I have a networking layer that currently uses completion handlers to deliver a result when an operation completes.
As I support a number of iOS versions, I extend the network layer within the app to provide support for Combine. I'd now like to extend this to also support async/await, but I am struggling to see how to do so in a way that still allows me to cancel requests.
The basic implementation looks like:
protocol HTTPClientTask {
    func cancel()
}

protocol HTTPClient {
    typealias Result = Swift.Result<(data: Data, response: HTTPURLResponse), Error>
    @discardableResult
    func dispatch(_ request: URLRequest, completion: @escaping (Result) -> Void) -> HTTPClientTask
}

final class URLSessionHTTPClient: HTTPClient {

    private let session: URLSession

    init(session: URLSession) {
        self.session = session
    }

    func dispatch(_ request: URLRequest, completion: @escaping (HTTPClient.Result) -> Void) -> HTTPClientTask {
        let task = session.dataTask(with: request) { data, response, error in
            completion(Result {
                if let error = error {
                    throw error
                } else if let data = data, let response = response as? HTTPURLResponse {
                    return (data, response)
                } else {
                    throw UnexpectedValuesRepresentation()
                }
            })
        }
        task.resume()
        return URLSessionTaskWrapper(wrapped: task)
    }
}

private extension URLSessionHTTPClient {
    struct UnexpectedValuesRepresentation: Error {}

    struct URLSessionTaskWrapper: HTTPClientTask {
        let wrapped: URLSessionTask

        func cancel() {
            wrapped.cancel()
        }
    }
}
It very simply provides an abstraction that allows me to inject a URLSession instance. By returning HTTPClientTask I can call cancel from a client and end the request.
I extend this in a client app using Combine as follows:
extension HTTPClient {
    typealias Publisher = AnyPublisher<(data: Data, response: HTTPURLResponse), Error>

    func dispatchPublisher(for request: URLRequest) -> Publisher {
        var task: HTTPClientTask?

        return Deferred {
            Future { completion in
                task = self.dispatch(request, completion: completion)
            }
        }
        .handleEvents(receiveCancel: { task?.cancel() })
        .eraseToAnyPublisher()
    }
}
As you can see, I now have an interface that supports cancelling tasks. Using async/await, however, I am unsure what this should look like or how I can provide a mechanism for cancelling requests.
My current attempt is:
extension HTTPClient {
    func dispatch(_ request: URLRequest) async -> HTTPClient.Result {

        let task = Task { () -> (data: Data, response: HTTPURLResponse) in
            return try await withCheckedThrowingContinuation { continuation in
                self.dispatch(request) { result in
                    switch result {
                    case let .success(values): continuation.resume(returning: values)
                    case let .failure(error): continuation.resume(throwing: error)
                    }
                }
            }
        }

        do {
            let output = try await task.value
            return .success(output)
        } catch {
            return .failure(error)
        }
    }
}
However, this simply provides the async implementation; I am unable to cancel it. How should this be handled?
ANSWER
Answered 2021-Oct-10 at 13:42 async/await might not be the proper paradigm if you want cancellation. The reason is that the new structured concurrency support in Swift allows you to write code that looks single-threaded/synchronous, but in fact it's multi-threaded.
Take for example a naive synchronous code:
let data = try Data(contentsOf: fileURL)
If the file is huge, then it might take a lot of time for the operation to finish, and during this time the operation cannot be cancelled, and the caller thread is blocked.
Now, assuming Data exports an async version of the above initializer, you'd write the async version of the code similar to this:
let data = try await Data(contentsOf: fileURL)
For the developer, it's the same coding style: once the operation finishes, they'll either have a data variable to use or they'll receive an error.
In both cases, there's no cancellation built in, as the operation is synchronous from the developer's perspective. The major difference is that the await-ed call doesn't block the caller thread; on the other hand, once control flow returns, the code may well continue executing on a different thread.
Now, if you need support for cancellation, then you'll have to store somewhere some identifiable data that can be used to cancel the operation.
If you'll want to store those identifiers from the caller scope, then you'll need to split your operation in two: initialization, and execution.
Something along the lines of
extension HTTPClient {
    // note that this is not async
    func task(for request: URLRequest) -> HTTPClientTask {
        // ...
    }
}

class HTTPClientTask {
    func dispatch() async -> HTTPClient.Result {
        // ...
    }
}

let task = httpClient.task(for: urlRequest)
self.theTask = task
let result = await task.dispatch()

// somewhere outside the await scope
self.theTask.cancel()
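As a complementary sketch: since Swift 5.5 you can also keep a single async call and forward task cancellation to the underlying request with `withTaskCancellationHandler`. This is an illustration only, not the author's code — the `dispatchCancellable` name and the unsynchronized `TaskBox` are mine, and a production version would guard the box with a lock:

```
extension HTTPClient {
    func dispatchCancellable(_ request: URLRequest) async -> HTTPClient.Result {
        // Box to share the underlying task with the cancellation handler.
        // NOTE: synchronization is elided for brevity.
        final class TaskBox { var task: HTTPClientTask? }
        let box = TaskBox()

        return await withTaskCancellationHandler {
            await withCheckedContinuation { continuation in
                box.task = self.dispatch(request) { result in
                    continuation.resume(returning: result)
                }
            }
        } onCancel: {
            // Forward Swift-concurrency cancellation to the URLSession task.
            box.task?.cancel()
        }
    }
}
```

With this shape, cancelling the enclosing `Task` (e.g. `task.cancel()` on a `Task { await client.dispatchCancellable(request) }`) cancels the in-flight network request too, so no separate handle needs to be stored.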
QUESTION
How to configure GKE Autopilot w/Envoy & gRPC-Web
Asked 2021-Dec-14 at 20:31 I have an application running on my local machine that uses React -> gRPC-Web -> Envoy -> Go app, and everything runs with no problems. I'm trying to deploy this using GKE Autopilot and I just haven't been able to get the configuration right. I'm new to all of GCP/GKE, so I'm looking for help to figure out where I'm going wrong.
I was following this doc initially, even though I only have one gRPC service: https://cloud.google.com/architecture/exposing-grpc-services-on-gke-using-envoy-proxy
From what I've read, GKE Autopilot mode requires using External HTTP(s) load balancing instead of Network Load Balancing as described in the above solution, so I've been trying to get that to work. After a variety of attempts, my current strategy has an Ingress, BackendConfig, Service, and Deployment. The deployment has three containers: my app, an Envoy sidecar to transform the gRPC-Web requests and responses, and a cloud SQL proxy sidecar. I eventually want to be using TLS, but for now, I left that out so it wouldn't complicate things even more.
When I apply all of the configs, the backend service shows one backend in one zone and the health check fails. The health check is set for port 8080 and path /healthz, which is what I think I've specified in the deployment config, but I'm suspicious because when I look at the details for the envoy-sidecar container, it shows the readiness probe as: http-get HTTP://:0/healthz headers=x-envoy-livenessprobe:healthz. Does ":0" just mean it's using the default address and port for the container, or does it indicate a config problem?
I've been reading various docs and just haven't been able to piece it all together. Is there an example somewhere that shows how this can be done? I've been searching and haven't found one.
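One way to exercise the Envoy health-check filter independently of the GKE probe is to port-forward the sidecar and curl it by hand; a sketch assuming the configs below are deployed (note that the health_check filter matches both the `/healthz` path and the `x-envoy-livenessprobe` header, so both must be supplied):

```
kubectl port-forward deploy/grammar-games-core 8080:8080 &

# Should return 200 if the filter and the upstream gRPC health check are healthy:
curl -i -H 'x-envoy-livenessprobe: healthz' http://localhost:8080/healthz
```

If this succeeds but the load balancer's health check still fails, the problem is likely between the load balancer and the pod (firewall rules, NEG wiring) rather than in Envoy itself.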
My current configs are:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grammar-games-ingress
  #annotations:
    # If the class annotation is not specified it defaults to "gce".
    # kubernetes.io/ingress.class: "gce"
    # kubernetes.io/ingress.global-static-ip-name: <IP addr>
spec:
  defaultBackend:
    service:
      name: grammar-games-core
      port:
        number: 80
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: grammar-games-bec
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  healthCheck:
    checkIntervalSec: 15
    port: 8080
    type: HTTP
    requestPath: /healthz
    timeoutSec: 60
---
apiVersion: v1
kind: Service
metadata:
  name: grammar-games-core
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/app-protocols: '{"http":"HTTP"}'
    cloud.google.com/backend-config: '{"default": "grammar-games-bec"}'
spec:
  type: ClusterIP
  selector:
    app: grammar-games-core
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grammar-games-core
spec:
  # Two replicas for right now, just so I can see how RPC calls get directed.
  # replicas: 2
  selector:
    matchLabels:
      app: grammar-games-core
  template:
    metadata:
      labels:
        app: grammar-games-core
    spec:
      serviceAccountName: grammar-games-core-k8sa
      containers:
      - name: grammar-games-core
        image: gcr.io/grammar-games/grammar-games-core:1.1.2
        command:
        - "/bin/grammar-games-core"
        ports:
        - containerPort: 52001
        env:
        - name: GAMESDB_USER
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: username
        - name: GAMESDB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: password
        - name: GAMESDB_DB_NAME
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: db-name
        - name: GRPC_SERVER_PORT
          value: '52001'
        - name: GAMES_LOG_FILE_PATH
          value: ''
        - name: GAMESDB_LOG_LEVEL
          value: 'debug'
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
        readinessProbe:
          exec:
            command: ["/bin/grpc_health_probe", "-addr=:52001"]
          initialDelaySeconds: 5
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.24.0
        command:
        - "/cloud_sql_proxy"

        # If connecting from a VPC-native GKE cluster, you can use the
        # following flag to have the proxy connect over private IP
        # - "-ip_address_types=PRIVATE"

        # Replace DB_PORT with the port the proxy should listen on
        # Defaults: MySQL: 3306, Postgres: 5432, SQLServer: 1433
        - "-instances=grammar-games:us-east1:grammar-games-db=tcp:3306"
        securityContext:
          # The default Cloud SQL proxy image runs as the
          # "nonroot" user and group (uid: 65532) by default.
          runAsNonRoot: true
        # Resource configuration depends on an application's requirements. You
        # should adjust the following values based on what your application
        # needs. For details, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
      - name: envoy-sidecar
        image: envoyproxy/envoy:v1.20-latest
        ports:
        - name: http
          containerPort: 8080
        resources:
          requests:
            cpu: 10m
            ephemeral-storage: 256Mi
            memory: 256Mi
        volumeMounts:
        - name: config
          mountPath: /etc/envoy
        readinessProbe:
          httpGet:
            port: http
            httpHeaders:
            - name: x-envoy-livenessprobe
              value: healthz
            path: /healthz
            scheme: HTTP
      volumes:
      - name: config
        configMap:
          name: envoy-sidecar-conf
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-sidecar-conf
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 8080
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              access_log:
              - name: envoy.access_loggers.stdout
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
              codec_type: AUTO
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: http
                  domains:
                  - "*"
                  routes:
                  - match:
                      prefix: "/grammar_games_protos.GrammarGames/"
                    route:
                      cluster: grammar-games-core-grpc
                  cors:
                    allow_origin_string_match:
                    - prefix: "*"
                    allow_methods: GET, PUT, DELETE, POST, OPTIONS
                    allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                    max_age: "1728000"
                    expose_headers: custom-header-1,grpc-status,grpc-message
              http_filters:
              - name: envoy.filters.http.health_check
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck
                  pass_through_mode: false
                  headers:
                  - name: ":path"
                    exact_match: "/healthz"
                  - name: "x-envoy-livenessprobe"
                    exact_match: "healthz"
              - name: envoy.filters.http.grpc_web
              - name: envoy.filters.http.cors
              - name: envoy.filters.http.router
                typed_config: {}
      clusters:
      - name: grammar-games-core-grpc
        connect_timeout: 0.5s
        type: logical_dns
        lb_policy: ROUND_ROBIN
        http2_protocol_options: {}
        load_assignment:
          cluster_name: grammar-games-core-grpc
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 0.0.0.0
                    port_value: 52001
        health_checks:
          timeout: 1s
          interval: 10s
          unhealthy_threshold: 2
          healthy_threshold: 2
          grpc_health_check: {}
    admin:
      access_log_path: /dev/stdout
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 8090
ANSWER
Answered 2021-Oct-14 at 22:35 Here is some documentation about Setting up HTTP(S) Load Balancing with Ingress. This tutorial shows how to run a web application behind an external HTTP(S) load balancer by configuring the Ingress resource.
Related to creating an HTTP load balancer on GKE using Ingress, I found two threads where the instances created are marked as unhealthy.
In the first one, they mention the need to manually add a firewall rule allowing the HTTP load balancer IP range to pass the health check.
In the second one, they mention that the Pod's spec must also include containerPort, as yours already does. Example:
1apiVersion: networking.k8s.io/v1
2kind: Ingress
3metadata:
4 name: grammar-games-ingress
5 #annotations:
6 # If the class annotation is not specified it defaults to "gce".
7 # kubernetes.io/ingress.class: "gce"
8 # kubernetes.io/ingress.global-static-ip-name: <IP addr>
9spec:
10 defaultBackend:
11 service:
12 name: grammar-games-core
13 port:
14 number: 80
15---
16apiVersion: cloud.google.com/v1
17kind: BackendConfig
18metadata:
19 name: grammar-games-bec
20 annotations:
21 cloud.google.com/neg: '{"ingress": true}'
22spec:
23 sessionAffinity:
24 affinityType: "CLIENT_IP"
25 healthCheck:
26 checkIntervalSec: 15
27 port: 8080
28 type: HTTP
29 requestPath: /healthz
30 timeoutSec: 60
31---
32apiVersion: v1
33kind: Service
34metadata:
35 name: grammar-games-core
36 annotations:
37 cloud.google.com/neg: '{"ingress": true}'
38 cloud.google.com/app-protocols: '{"http":"HTTP"}'
39 cloud.google.com/backend-config: '{"default": "grammar-games-bec"}'
40spec:
41 type: ClusterIP
42 selector:
43 app: grammar-games-core
44 ports:
45 - name: http
46 protocol: TCP
47 port: 80
48 targetPort: 8080
49---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grammar-games-core
spec:
  # Two replicas for right now, just to see how RPC calls get directed.
  # replicas: 2
  selector:
    matchLabels:
      app: grammar-games-core
  template:
    metadata:
      labels:
        app: grammar-games-core
    spec:
      serviceAccountName: grammar-games-core-k8sa
      containers:
      - name: grammar-games-core
        image: gcr.io/grammar-games/grammar-games-core:1.1.2
        command:
        - "/bin/grammar-games-core"
        ports:
        - containerPort: 52001
        env:
        - name: GAMESDB_USER
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: username
        - name: GAMESDB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: password
        - name: GAMESDB_DB_NAME
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: db-name
        - name: GRPC_SERVER_PORT
          value: '52001'
        - name: GAMES_LOG_FILE_PATH
          value: ''
        - name: GAMESDB_LOG_LEVEL
          value: 'debug'
        resources:
          requests:
            # Adjust these values based on the application's actual
            # memory and CPU requirements.
            memory: "2Gi"
            cpu: "1"
        readinessProbe:
          exec:
            command: ["/bin/grpc_health_probe", "-addr=:52001"]
          initialDelaySeconds: 5
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy.
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.24.0
        command:
        - "/cloud_sql_proxy"

        # If connecting from a VPC-native GKE cluster, you can use the
        # following flag to have the proxy connect over private IP:
        # - "-ip_address_types=PRIVATE"

        # The port after "tcp:" is the port the proxy listens on.
        # Defaults: MySQL: 3306, Postgres: 5432, SQLServer: 1433
        - "-instances=grammar-games:us-east1:grammar-games-db=tcp:3306"
        securityContext:
          # The default Cloud SQL proxy image runs as the
          # "nonroot" user and group (uid: 65532) by default.
          runAsNonRoot: true
        # Resource configuration depends on an application's requirements. You
        # should adjust the following values based on what your application
        # needs. For details, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
      - name: envoy-sidecar
        image: envoyproxy/envoy:v1.20-latest
        ports:
        - name: http
          containerPort: 8080
        resources:
          requests:
            cpu: 10m
            ephemeral-storage: 256Mi
            memory: 256Mi
        volumeMounts:
        - name: config
          mountPath: /etc/envoy
        readinessProbe:
          httpGet:
            port: http
            httpHeaders:
            - name: x-envoy-livenessprobe
              value: healthz
            path: /healthz
            scheme: HTTP
      volumes:
      - name: config
        configMap:
          name: envoy-sidecar-conf
---
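The `secretKeyRef` entries in the Deployment above expect a Secret named `gamesdb-config` with keys `username`, `password`, and `db-name`. A minimal sketch of that Secret follows; the key names come from the Deployment, but every value shown is a placeholder, since the real credentials are (rightly) not in the source.

```yaml
# Hypothetical sketch only: the Secret the Deployment's secretKeyRef
# entries reference. Key names must match exactly; the values are
# placeholders, not real credentials.
apiVersion: v1
kind: Secret
metadata:
  name: gamesdb-config
type: Opaque
stringData:
  username: games-app      # placeholder
  password: change-me      # placeholder
  db-name: grammar_games   # placeholder
```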
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-sidecar-conf
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 8080
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              access_log:
              - name: envoy.access_loggers.stdout
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
              codec_type: AUTO
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: http
                  domains:
                  - "*"
                  routes:
                  - match:
                      prefix: "/grammar_games_protos.GrammarGames/"
                    route:
                      cluster: grammar-games-core-grpc
                  cors:
                    allow_origin_string_match:
                    - prefix: "*"
                    allow_methods: GET, PUT, DELETE, POST, OPTIONS
                    allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                    max_age: "1728000"
                    expose_headers: custom-header-1,grpc-status,grpc-message
              http_filters:
              - name: envoy.filters.http.health_check
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck
                  pass_through_mode: false
                  headers:
                  - name: ":path"
                    exact_match: "/healthz"
                  - name: "x-envoy-livenessprobe"
                    exact_match: "healthz"
              - name: envoy.filters.http.grpc_web
              - name: envoy.filters.http.cors
              - name: envoy.filters.http.router
                typed_config: {}
      clusters:
      - name: grammar-games-core-grpc
        connect_timeout: 0.5s
        type: logical_dns
        lb_policy: ROUND_ROBIN
        http2_protocol_options: {}
        load_assignment:
          cluster_name: grammar-games-core-grpc
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 0.0.0.0
                    port_value: 52001
        health_checks:
        - timeout: 1s
          interval: 10s
          unhealthy_threshold: 2
          healthy_threshold: 2
          grpc_health_check: {}
    admin:
      access_log_path: /dev/stdout
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 8090
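The Service's `cloud.google.com/neg` and `cloud.google.com/backend-config` annotations suggest it is exposed through a GKE Ingress, which is what actually consumes the BackendConfig health check. A minimal sketch of such an Ingress follows; the Ingress name is invented, and host, TLS, and static-IP details are omitted as assumptions outside the original file.

```yaml
# Hypothetical sketch only: a GKE Ingress routing all traffic to the
# grammar-games-core Service defined above (port 80 -> Envoy's 8080).
# The name is invented; host/TLS configuration is omitted.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grammar-games-ingress
spec:
  defaultBackend:
    service:
      name: grammar-games-core
      port:
        number: 80
```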