Horizontal Scalability TiDB expands both SQL processing and storage by simply adding new nodes. This makes infrastructure capacity planning both easier and more cost-effective than traditional relational databases which only scale vertically.
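When TiDB runs on Kubernetes with TiDB Operator, for instance, scaling out is typically just a matter of raising the replica counts on the TidbCluster resource; the following is a minimal sketch assuming a cluster named "basic" in a "tidb-cluster" namespace (both names are assumptions, not part of this page).
# Scale the storage layer (TiKV) and the SQL layer (TiDB) independently
kubectl -n tidb-cluster patch tidbcluster basic --type merge \
  -p '{"spec": {"tikv": {"replicas": 5}, "tidb": {"replicas": 3}}}'
# Watch the new pods join the cluster
kubectl -n tidb-cluster get pods -l app.kubernetes.io/instance=basic -w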
MySQL Compatible Syntax TiDB acts like it is a MySQL 5.7 server to your applications. You can continue to use all of the existing MySQL client libraries, and in many cases, you will not need to change a single line of code in your application. Because TiDB is built from scratch, not a MySQL fork, please check out the list of known compatibility differences.
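As a quick illustration, a stock MySQL client can talk to TiDB directly; the sketch below assumes a TiDB server reachable at 127.0.0.1 on its default port 4000.
# Connect with the ordinary MySQL CLI client (host and credentials are assumptions)
mysql -h 127.0.0.1 -P 4000 -u root
# Inside the session, familiar MySQL-flavoured SQL works unchanged, e.g.:
#   CREATE TABLE t (id INT PRIMARY KEY, v VARCHAR(20));
#   INSERT INTO t VALUES (1, 'hello');
#   SELECT * FROM t;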
Distributed Transactions TiDB internally shards tables into small range-based chunks that we refer to as "Regions". Each Region defaults to approximately 100 MiB in size, and TiDB uses an optimized two-phase commit to ensure that Regions are maintained in a transactionally consistent way.
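The Region layout is visible from the SQL layer via the SHOW TABLE ... REGIONS statement; a minimal sketch, assuming a table test.t and the connection details above:
# List the Regions backing a table, including their key ranges and leader stores
mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW TABLE test.t REGIONS;"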
Cloud Native TiDB is designed to work in the cloud -- public, private, or hybrid -- making deployment, provisioning, operations, and maintenance simple. The storage layer of TiDB, called TiKV, is a Cloud Native Computing Foundation (CNCF) Graduated project. The architecture of the TiDB platform also allows SQL processing and storage to be scaled independently of each other in a very cloud-friendly manner.
Minimize ETL TiDB is designed to support both transaction processing (OLTP) and analytical processing (OLAP) workloads. This means that while you may have traditionally transacted on MySQL and then Extracted, Transformed and Loaded (ETL) data into a column store for analytical processing, this step is no longer required.
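In practice this usually means adding a columnar TiFlash replica for the tables you want to analyze instead of maintaining a separate ETL pipeline; a sketch, assuming a test.orders table and the connection details above:
# Create one TiFlash replica; analytical queries can then be served from the
# column store while OLTP traffic keeps hitting the row store (TiKV)
mysql -h 127.0.0.1 -P 4000 -u root -e "ALTER TABLE test.orders SET TIFLASH REPLICA 1;"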
High Availability TiDB uses the Raft consensus algorithm to ensure that data is highly available and safely replicated throughout storage in Raft groups. In the event of failure, a Raft group will automatically elect a new leader for the failed member, and self-heal the TiDB cluster without any required manual intervention. Failure and self-healing operations are also transparent to applications.
QUESTION
Missing NVMe SSD in AWS Kubernetes
Asked 2021-May-11 at 10:34
AWS seems to be hiding my NVMe SSD when an r6gd instance is deployed in Kubernetes, created via the config below.
# eksctl create cluster -f spot04test00.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: tidb-arm-dev # replace with your cluster name
  region: ap-southeast-1 # replace with your preferred AWS region
nodeGroups:
  - name: tiflash-1a
    desiredCapacity: 1
    availabilityZones: ["ap-southeast-1a"]
    instancesDistribution:
      instanceTypes: ["r6gd.medium"]
    privateNetworking: true
    labels:
      dedicated: tiflash
The running instance has an 80 GiB EBS gp3 volume and ZERO NVMe SSD storage, as shown in Figure 1.
Why did Amazon swap out the 59 GiB NVMe for an 80 GiB EBS gp3 volume?
Where has my NVMe disk gone?
Even when I pre-allocate ephemeral-storage using non-managed nodeGroups, the node still shows an 80 GiB EBS volume (Figure 1).
If I use the AWS Web UI to start a new r6gd instance, it clearly shows the attached NVMe SSD (Figure 2).
After further experimentation, I found that the 80 GiB EBS volume is attached to r6gd.medium, r6g.medium, r6gd.large, and r6g.large instances as the 'ephemeral-storage' resource, regardless of instance size.
kubectl describe nodes:
Capacity:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 83864556Ki
hugepages-2Mi: 0
memory: 16307140Ki
pods: 29
Allocatable:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 77289574682
hugepages-2Mi: 0
memory: 16204740Ki
pods: 29
Awaiting enlightenment from folks who have successfully utilized NVMe SSD in Kubernetes.
ANSWER
Answered 2021-Mar-27 at 12:50
Occam's razor says that the reason you're seeing an 80 GB root volume rather than the 8 GB volume that you selected is that you're looking at the wrong instance. You may disagree with this, but if there were a bug in the AWS Console that replaced small drives with much larger ones, I would expect to hear screams of outrage on Hacker News.
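If you want to rule that out, you can cross-check which instance type actually backs the node; a rough sketch (node access method and metadata-service configuration are assumptions about your setup):
# Show each node together with the instance type the cloud provider reported
kubectl get nodes -L node.kubernetes.io/instance-type -o wide
# From a shell on the node itself, ask the instance metadata service
curl -s http://169.254.169.254/latest/meta-data/instance-type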
The missing SSD is much easier to explain: you have to format and mount the volume before use.
If you run the lsblk command, you should see the volume:
[ec2-user@ip-172-31-91-142 ~]$ lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0  55G  0 disk
nvme0n1       259:1    0   8G  0 disk
├─nvme0n1p1   259:2    0   8G  0 part /
└─nvme0n1p128 259:3    0  10M  0 part /boot/efi
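If you're not sure which device is the instance store and which is EBS, the disk model string usually makes it obvious (a sketch; available columns vary slightly by distro):
# MODEL typically reads "Amazon EC2 NVMe Instance Storage" for the instance store
# and "Amazon Elastic Block Store" for EBS-backed devices
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,MODEL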
First, you need to create a filesystem. If you know that you want specific filesystem behavior, then pick a type. Here I'm just using the default (ext2):
sudo mkfs /dev/nvme1n1
# output omitted
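If you'd rather choose the type explicitly, for example ext4, pass it to mkfs (the device name matches the lsblk output above):
# Same idea, but with an explicitly selected filesystem type
sudo mkfs -t ext4 /dev/nvme1n1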
Then, you need to mount the filesystem. Here I'm using the traditional mountpoint for transient filesystems, but you will probably want to pick something different:
sudo mount /dev/nvme1n1 /mnt
Lastly, if you want the filesystem to be remounted after a reboot, you'll have to update /etc/fstab. Of course, if you stop and restart the instance (versus reboot), the filesystem and everything on it will disappear.
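An /etc/fstab entry for this would look roughly like the line below; the nofail option is my own addition so that a missing or unformatted instance-store volume doesn't block boot:
# Append the mount to /etc/fstab, then verify it without rebooting
echo '/dev/nvme1n1  /mnt  ext2  defaults,nofail  0  0' | sudo tee -a /etc/fstab
sudo mount -a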
You won't see the volume in the Console's "Storage" tab. That tab just shows attached EBS volumes, not ephemeral volumes.
Community discussions and code snippets above include content from the Stack Exchange Network.