Setting up MLMs
Before setting up your MLM, install a container runtime capable of running Open Container Initiative (OCI) container images. Docker is one such runtime. The installation process depends on your operating system and OCI runtime.
We have tested and validated containerized services (including UDFs, Spark, and TensorFlow) using Docker.
The process for setting up MLMs is as follows:
1. Download the TensorFlow container image and configure the server to use containerized TensorFlow.
You can install the image by selecting the corresponding option during a Vector installation, by specifying the -tflowdownload flag, or after installation by running the iisutensorflow script:
iisutensorflow -download_newest_compatible
2. Containerized services use a cache directory to store models and mount them into containers. The cache location can be configured using the "udfcache" parameter in the server group in vectorwise.conf.
Default: /tmp/vw_udfcache
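For illustration, a vectorwise.conf entry overriding the default might look like the sketch below. The "server" group name comes from the text above; the example path is an assumption chosen for illustration, not a recommended value.

[server]
# udfcache: directory where containerized services cache models
# before mounting them into containers (default: /tmp/vw_udfcache)
udfcache=/var/lib/vector/vw_udfcache

Placing the cache on a persistent volume, rather than /tmp, avoids re-downloading models after a reboot.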
Last modified date: 12/19/2024