
Plan disks

seekdb servers use three kinds of disks: a data disk, a transaction log disk, and a seekdb installation disk. If you are a personal user, you can put all data on a single disk and skip this step. If you are an enterprise user, it is recommended to mount the three directories on separate disks.

If your machine does not have three disks, or if you are using a RAID disk array, you need to partition the disk or the array's logical volume. The following partitioning scheme is recommended:

  • Data disk

    The data disk stores baseline data; its path is specified by the configuration parameter data_dir. When you start seekdb for the first time, ${data_dir}/{sstable,slog} is created automatically. The space occupied on the data disk is controlled by the datafile_disk_percentage and datafile_size parameters. You can also expand the data files dynamically after deployment through the datafile_next and datafile_maxsize configuration items. For details, see Configure dynamic expansion of disk data files.

  • Transaction log disk

    The path of the transaction log disk is specified by the configuration parameter redo-dir. It is recommended that you set the size of the transaction log disk to at least 3 to 4 times the seekdb memory size. When you start seekdb for the first time, ${redo-dir} is created automatically. The transaction log disk contains multiple fixed-size files, which seekdb creates and clears automatically as needed. When transaction logs reach 80% of the total disk capacity, automatic clearing is triggered. However, a transaction log file can only be deleted after the memory data it corresponds to has been merged into the baseline data.

    For the same data volume, the size of the transaction logs is approximately three times the size of the memory data. The upper limit of the space required for the transaction log disk is therefore proportional to the amount of incremental data written between two merges. Empirical formula: transaction log disk size = 3 to 4 times the upper limit of the incremental data memory.

  • seekdb installation disk

    The path of the seekdb installation disk is specified by the configuration parameter base-dir. The seekdb RPM package is installed under ${base-dir}. Baseline data files and transaction log files are linked to the separate data disk and transaction log disk through symbolic links. seekdb runtime logs are written under ${base-dir}/log. Runtime logs grow continuously and seekdb does not delete them automatically, so you need to clean them up regularly.
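    The 3 to 4 times sizing rule for the transaction log disk can be checked against the host's physical memory. This sketch assumes seekdb is allocated most of the machine's memory; adjust the multiplier and the memory figure to your actual configuration:

    ```shell
    # Estimate the transaction log disk size as 4x total physical memory.
    # /proc/meminfo reports MemTotal in kB.
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    log_disk_gb=$(( mem_kb * 4 / 1024 / 1024 ))
    echo "Recommended transaction log disk size: at least ${log_disk_gb} GiB"
    ```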
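    Runtime log cleanup can be automated with a scheduled find job. The path, the seven-day retention, and the "*.log*" file naming pattern below are illustrative assumptions; verify them against the actual contents of your ${base-dir}/log directory before adding this to cron:

    ```shell
    # Remove seekdb runtime log files older than 7 days.
    # /home/admin/seekdb/log and the "*.log*" pattern are examples; check
    # them against your installation first.
    find /home/admin/seekdb/log -type f -name "*.log*" -mtime +7 -delete
    ```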

Disk mounting

The disk mount point requirements for seekdb are as follows.

  • Personal users

    For personal users, disk mounting is not required. Make sure that at least 5 GB of disk space is available during use.

  • Enterprise users

    | Directory  | Size                                        | Purpose                           | File system format      |
    | ---------- | ------------------------------------------- | --------------------------------- | ----------------------- |
    | /home      | 100 GB~300 GB                               | seekdb database installation disk | ext4 or xfs recommended |
    | /data/log1 | 2 times the memory size allocated to seekdb | seekdb process log disk           | ext4 or xfs recommended |
    | /data/1    | Depends on the size of data to be stored    | seekdb process data disk          | ext4 or xfs recommended |
    Note:
    • It is recommended that the root directory be at least 50 GB. If using LVM, it is recommended to use striping parameters when creating. Example: lvcreate -n data -L 3000G obvg --stripes=3 --stripesize=128
    • In production environments, it is recommended to use different disks for the data disk, log disk, and installation disk to avoid performance issues.

Disk mounting operations

Disk mounting must be performed as the root user. There are two methods:

  • Mount disks using LVM tools (recommended).
  • Mount disks using fdisk tools.

Mount disks using LVM tools

  1. Check disk information

    Use the fdisk -l command to identify available disks and partitions, and confirm the target device (such as /dev/sdb1).

    fdisk -l
  2. Install LVM tools

    If LVM is not pre-installed, run the following command to install LVM. If LVM is already installed, skip this step.

    • Debian/Ubuntu systems

      apt-get install lvm2
    • CentOS/RHEL systems

      yum install lvm2
  3. Create a physical volume (PV).

    1. Initialize the partition as a physical volume.

      pvcreate /dev/sdb1
    2. Verify the PV creation result

      pvs
  4. Create a volume group (VG).

    1. Combine multiple physical volumes into one VG.

      vgcreate vg01 /dev/sdb1 /dev/sdc1  
    2. View VG information

      vgs  
  5. Create a logical volume (LV).

    1. Create a 100 GB logical volume from the VG.

      lvcreate -L 100G -n lv01 vg01

      The size of the logical volume here can be set according to actual needs.

    2. View LV information.

      lvs
  6. Format and mount.

    1. Format as ext4 file system.

      mkfs.ext4 /dev/vg01/lv01 
    2. Create a mount point.

      mkdir -p /data/1
    3. Temporarily mount.

      mount /dev/vg01/lv01 /data/1
  7. Set automatic mounting on boot.

    Edit the /etc/fstab file and add the mount configuration:

    vim /etc/fstab

    Add the following content to the configuration file:

    /dev/vg01/lv01  /data/1  ext4  defaults,noatime,nodiratime,nodelalloc,barrier=0  0  0
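
    After saving /etc/fstab, the new entry can be verified without a reboot: mount -a mounts every filesystem listed in fstab that is not already mounted, and findmnt confirms the mount point is active (run as root):

    ```shell
    # Apply all fstab entries that are not currently mounted; an error here
    # usually indicates a typo in /etc/fstab.
    mount -a
    # Confirm that /data/1 is now an active mount point.
    findmnt /data/1
    ```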

Mount disks using fdisk tools

  1. Check disk information

    Use the fdisk -l command to identify available disks and partitions, and confirm the target disk (such as /dev/sdb).

    fdisk -l
  2. Create a partition

    Run the fdisk tool on the target disk (not on an existing partition): enter n to create a primary partition, then enter w to write the partition table and exit.

    fdisk /dev/sdb
  3. Format and mount.

    1. Format as ext4 file system.

      mkfs.ext4 /dev/sdb1
    2. Create a mount point.

      mkdir -p /data/1
    3. Temporarily mount.

      mount /dev/sdb1 /data/1
  4. Set automatic mounting on boot.

    Edit the /etc/fstab file and add the mount configuration:

    vim /etc/fstab

    Add the following content to the configuration file:

    /dev/sdb1  /data/1  ext4  defaults,noatime,nodiratime,nodelalloc,barrier=0  0  0

Check disks

After disks are mounted, run the following command to check the disk mounting status:

df -h

The following result is returned:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         31G     0   31G   0% /dev
tmpfs            31G     0   31G   0% /dev/shm
tmpfs            31G  516K   31G   1% /run
tmpfs            31G     0   31G   0% /sys/fs/cgroup
/dev/vda1       493G  171G  302G  37% /
tmpfs           6.2G     0  6.2G   0% /run/user/0
/dev/sdb1       984G   77M  934G   1% /data/1
/dev/vdc1       196G   61M  186G   1% /data/log1
/dev/vdb1       492G   73M  467G   1% /home/admin/seekdb

Result description

  • /data/1 is the data disk with a size of 1 TB.

  • /data/log1 stores logs.

  • /home/admin/seekdb stores seekdb binary files and runtime logs.

Ensure that the disks corresponding to data_dir, redo_dir, and home_path in the configuration file are mounted, that the directories corresponding to data_dir and redo_dir are empty, and that the disk usage of the directory corresponding to data_dir is less than 4%.
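
These preconditions can be scripted. The sketch below assumes /data/1 is the mount point for data_dir; substitute your actual path:

```shell
# Verify that the data directory is empty and that its filesystem usage
# is below 4% before the first start. /data/1 is an example path.
dir=/data/1
if [ -n "$(ls -A "$dir")" ]; then
    echo "ERROR: $dir is not empty"
fi
pcent=$(df --output=pcent "$dir" | tail -n 1 | tr -dc '0-9')
if [ "$pcent" -ge 4 ]; then
    echo "ERROR: disk usage of $dir is ${pcent}%, expected < 4%"
fi
```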

Set directory permissions

After disk mounting is complete, you need to check the permissions of the directories corresponding to the mounted disks.

Run the following command to check the permissions of cluster-related file directories.

Here, the data directory is used as an example:

[root@test001 data]# ls -al

The following result is returned:

drwxr-xr-x 2 admin admin 4096 Feb  9 18:43 .
drwxr-xr-x 2 admin admin 4096 Feb  9 18:43 log1

If you find that the admin user does not have permissions for related files after checking directory permissions, run the following command to change the file owner:

[root@test001 ~]# chown -R admin:admin /data/log1
[root@test001 ~]# chown -R admin:admin /data

Here, /data/log1 and /data are example mount directories. You need to replace them with your actual mount directories.
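
After changing the owner, you can confirm the result with stat, which prints the owner and group of each directory as "user:group". The paths below are the same example mount directories:

```shell
# Print the owner and group of the mounted directories; both should
# report admin:admin after the chown commands above.
stat -c '%U:%G' /data /data/log1
```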