OS Settings
Certain features of your operating system may need to be configured.
Virtual Address Space Allocation (Linux)
For optimal performance, X100 by default allocates a large amount of virtual address space to be able to use it most efficiently when needed. In some circumstances, this amount can be larger than the amount of physical memory.
Some Linux distributions by default disallow reserving unlimited virtual address space. We recommend that you configure your system to allow unlimited allocation of virtual address space unless there are compelling reasons (unrelated to X100) not to.
To check if your system allows unlimited allocation of virtual address space:
# cat /proc/sys/vm/overcommit_memory
should return 1.
# ulimit -v
should return unlimited.
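The two checks above can be combined into a small script. This is a sketch that only inspects the settings named in this section; the `check_vas` helper and its messages are illustrative, not part of Vector:

```shell
#!/bin/sh
# Report whether virtual address space allocation is unrestricted,
# based on the two settings described above.

# check_vas OVERCOMMIT VLIMIT -> prints OK or WARN
check_vas() {
    if [ "$1" = "1" ] && [ "$2" = "unlimited" ]; then
        echo "OK"
    else
        echo "WARN"
    fi
}

status=$(check_vas "$(cat /proc/sys/vm/overcommit_memory)" "$(ulimit -v)")
echo "virtual address space allocation: $status"
```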
To ensure the system does not limit allocation of virtual address space
Issue these commands after every system restart:
# echo 1 > /proc/sys/vm/overcommit_memory
# ulimit -v unlimited
To learn how to make this setting persistent, refer to your Linux documentation.
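One common way to make the settings persistent is a sysctl drop-in file plus a per-user limits entry. This is a sketch only: the file names are conventional rather than prescribed by Vector, and "vector" stands in for your installation owner user.

```
# /etc/sysctl.d/90-overcommit.conf -- applied at boot by sysctl
vm.overcommit_memory = 1

# /etc/security/limits.conf -- address-space limit for the
# installation owner user ("vector" is a placeholder)
vector  soft  as  unlimited
vector  hard  as  unlimited
```

On systemd-based distributions, per-service or global defaults may be configured differently; consult your distribution's documentation.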
Alternatively, X100 can be configured to not reserve virtual address space by setting the [memory] max_overalloc (see max_overalloc) configuration parameter to 0. This may limit the maximum performance your system can deliver.
Note:  If you reconfigure your system after Vector is installed to allow unlimited allocation of virtual address space, then you must also set the max_overalloc parameter correctly (to 2G by default).
Linux: To use the vwload command in parallel mode, /proc/sys/vm/overcommit_memory must be set to 1 or [memory] max_overalloc must be 0.
Increase max_map_count Kernel Parameter (Linux)
You may need to increase the max_map_count kernel parameter to avoid running out of map areas for the X100 server process.
To increase the max_map_count parameter
1. Add the following line to /etc/sysctl.conf:
vm.max_map_count=map_count
where map_count should be around 1 per 128 KB of system memory. For example:
vm.max_map_count=2097152
on a 256 GB system.
2. Reload the config as root:
sysctl -p
3. Check the new value:
cat /proc/sys/vm/max_map_count
4. Restart Vector.
Note:  The changed setting affects new processes only and should not adversely affect other processes or the OS. The memory is allocated only when a process needs the map areas.
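The rule of thumb above (one map area per 128 KB of system memory) can be computed directly from `MemTotal` in /proc/meminfo. The `suggest_map_count` helper below is a sketch, not an official formula beyond what is stated above:

```shell
#!/bin/sh
# Suggest a vm.max_map_count value: one map area per 128 KB of RAM.

suggest_map_count() {
    # $1: total system memory in KB (as reported by MemTotal)
    echo $(( $1 / 128 ))
}

mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "vm.max_map_count=$(suggest_map_count "$mem_kb")"
```

For a 256 GB system (268435456 KB), this yields the 2097152 shown in the example above.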
RLIMIT_MEMLOCK (Linux)
To prevent encryption keys stored in memory from being swapped to disk (a security risk), the Linux configuration parameter RLIMIT_MEMLOCK must be properly configured. If set too low, an error will occur when working with encrypted tables. RLIMIT_MEMLOCK defines how much memory an unprivileged process can lock. Locking a memory area protects it from swapping.
To configure this setting properly, use the following calculation: 40 bytes per encrypted table used in a query, rounded up to a full page of 4K (or 8K if overlapping page boundaries) per concurrent query (and only for the duration of the query).
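As a worked example of that calculation (hypothetical table and query counts; the common 4 KB page size is assumed, not the 8 KB boundary-overlap case):

```shell
#!/bin/sh
# Estimate locked-memory needs: 40 bytes per encrypted table in a query,
# rounded up to a whole 4 KB page, times the number of concurrent queries.

memlock_bytes() {
    # $1: encrypted tables per query, $2: concurrent queries
    page=4096
    per_query=$(( (40 * $1 + page - 1) / page * page ))
    echo $(( per_query * $2 ))
}

# e.g. 100 encrypted tables in each of 10 concurrent queries:
memlock_bytes 100 10
```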
To verify the current setting, as the installation owner user, issue the following command at a shell prompt:
ulimit -a
The value of "max locked memory" is the RLIMIT_MEMLOCK value. If it is too small, it can be changed by editing either the limits.conf file or the systemd system.conf file, depending on whether your Linux distribution uses systemd. Consult your system administrator for assistance.
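A typical non-systemd fix is an entry in limits.conf. This is a sketch: "vector" stands in for the installation owner user, and the value (in KB) should come from the calculation described above, not from this example.

```
# /etc/security/limits.conf -- memlock values are in KB
vector  soft  memlock  1024
vector  hard  memlock  1024
```

On systemd-based systems, the corresponding global default is `DefaultLimitMEMLOCK=` in /etc/systemd/system.conf.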
Using Large Pages
Note:  This feature requires a good understanding of memory management.
To reduce TLB (translation lookaside buffer) misses, modern CPUs offer a feature called "large pages" ("huge pages" on Linux). Large pages allow a page size of 2 MB (on some CPUs even 1 GB) instead of 4 KB. Using a larger page size is especially beneficial when accessing large amounts of memory with a random pattern. X100 supports the use of large pages for certain data structures that would benefit the most.
Note:  Consult the documentation for your operating system for details on how to enable large page support.
Linux: Some modern Linux systems (such as RHEL 6) have transparent huge pages functionality. Huge pages are used automatically and do not have to be configured manually. In such cases, do not use the X100 options for huge pages. Consult the documentation for your operating system to see if the operating system supports transparent huge pages and if it needs to be enabled.
Configuration
You must designate an amount of memory for large pages before starting Vector. This memory is used for large page allocations only, not for normal allocations. Because this division of memory is static, choose the amount of memory made available as large pages carefully.
Note:  The large pages feature applies to query memory only, not to buffer memory. Do not assign a significantly higher amount of memory to large pages than the X100 [memory] max_memory_size parameter. Doing so may not leave enough "normal" memory for the buffer pool and other processes in the system.
If you encounter problems with large pages, you can switch them off in the vectorwise.conf file by setting [memory] use_huge_tlb (see use_huge_tlb) to FALSE.
Requirements for Huge Pages on Linux
To enable the use of huge pages in Vector, the following are needed:
Kernel support. (Most distributions enable this in the standard kernel.)
The libhugetlbfs library must be installed.
For easier administration of this feature, extra tools are recommended, often found in a package called libhugetlbfs-utils or similar.
Designate Memory for Huge Pages on Linux
The commands and amounts provided here are an example of designating memory for huge pages on Linux. Before issuing these commands, understand what they do, and adapt the examples as needed. Reserving pages for huge page allocations is system-wide, so make sure you are not interfering with other users.
To make 2 GB available for 2 MB huge pages, issue the following commands as root on the command line before starting Vector:
hugeadm --create-global-mounts
hugeadm --pool-pages-min 2M:1024
To switch off, enter the following command as root on the command line:
hugeadm --pool-pages-min 2M:0
To check if memory is allocated for huge pages and how much of it is in use, type at the command line:
cat /proc/meminfo
The information about huge pages is shown in the following example lines:
HugePages_Total:  1024
HugePages_Free:   1024
HugePages_Rsvd:      0
HugePages_Surp:      0
Making memory available for huge pages requires defragmenting the specified amount of memory, so it can take a while. Typically, it is fastest to do this immediately after system startup, when memory is not as fragmented.
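The HugePages_* lines can also be checked programmatically. The `hugepage_summary` helper below is a sketch that parses only the fields shown in the example output above:

```shell
#!/bin/sh
# Print total and free huge pages from a meminfo-format file
# (defaults to /proc/meminfo).

hugepage_summary() {
    awk '/^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
         END {print t " total, " f " free"}' "${1:-/proc/meminfo}"
}

hugepage_summary
```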
For more in-depth information, see the man page of hugeadm (https://linux.die.net/man/8/hugeadm) and vm/hugetlbpage.txt (https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt) in the Linux kernel documentation.
Using NUMA Optimization
Some modern many-core systems trade uniform RAM access times for better scalability. This is known as NUMA (non-uniform memory access).
Consult the documentation for your system and for your operating system to find out if they support NUMA.
X100 provides a set of NUMA optimizations which, under Linux, require libnuma version 2.0.2 or newer. These optimizations are enabled by default when libnuma is detected and the system running X100 is a NUMA system (number of NUMA nodes > 1).
Note:  When the NUMA optimizations are enabled, X100 will use all NUMA nodes in your system, even if you restricted the number of NUMA nodes by using numactl.
If you encounter problems with the NUMA optimizations, they can be disabled in vectorwise.conf by setting [memory] use_numa (see use_numa) to FALSE.
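Whether the optimizations will engage can be anticipated by counting NUMA nodes, since the text above states they activate when the node count is greater than 1. The sysfs path below is the standard Linux location, but treat this as a sketch:

```shell
#!/bin/sh
# Count NUMA nodes via sysfs; X100 enables its NUMA optimizations
# when more than one node is present.

count_numa_nodes() {
    ls -d "${1:-/sys/devices/system/node}"/node[0-9]* 2>/dev/null | wc -l
}

echo "NUMA nodes: $(count_numa_nodes)"
```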
Last modified date: 03/21/2024