Configuring Machine Learning Models (MLM)
MLM Using TensorFlow
TensorFlow MLMs perform model scoring and inference directly in X100. Data does not move out of the engine, which has several advantages:
• Query integration: Subsequent operations can be performed in the database engine.
• Reduced data transfer: Only (potentially smaller) query results are shipped.
• Scalability: ML inference can run as a pipelined operation, so the full dataset does not need to be materialized. Materialization would be required when pulling the data into a dedicated environment or when using manual buffering strategies (see the sketch after this list).
• Integration and convenience: Inference is invoked as part of normal query execution.
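For contrast, the sketch below shows roughly what scoring outside the engine involves: exporting and materializing the data, loading the SavedModel in a separate Python environment, and shipping the results back. The file paths, the exported CSV, and the input tensor name are illustrative assumptions, not part of the product.

    # Hypothetical out-of-engine scoring flow that in-engine inference avoids.
    # Paths, the exported CSV, and the input name "inputs" are assumptions.
    import numpy as np
    import pandas as pd
    import tensorflow as tf

    # 1. The full dataset is first exported from the database and materialized.
    features = pd.read_csv("exported_rows.csv")

    # 2. The SavedModel is loaded in a dedicated Python environment.
    model = tf.saved_model.load("/models/example_savedmodel")
    infer = model.signatures["serving_default"]

    # 3. Inference runs outside the engine on the materialized data.
    batch = tf.constant(features.to_numpy(dtype=np.float32))
    scores = infer(inputs=batch)  # keyword name depends on the model's signature

    # 4. The scores must then be loaded back into the database for further SQL processing.
    print({name: tensor.numpy()[:5] for name, tensor in scores.items()})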
Note: Only TensorFlow MLMs saved in the SavedModel format are supported. The HDF5 (.h5) and native Keras formats are currently not supported.
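A model stored in one of the unsupported formats can be re-exported as a SavedModel with standard TensorFlow APIs before it is used. The following is a minimal sketch; the file names are placeholders.

    # Re-export an HDF5 (.h5) Keras model as a TensorFlow SavedModel directory.
    import tensorflow as tf

    # Load the existing model from the unsupported HDF5 format (requires h5py).
    model = tf.keras.models.load_model("model.h5")

    # Write it out as a SavedModel (saved_model.pb plus a variables/ subdirectory).
    # With TF 2.x / Keras 2:
    tf.saved_model.save(model, "my_model_savedmodel")
    # With Keras 3, model.export("my_model_savedmodel") produces an inference-only SavedModel.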