The Razorthink RZT aiOS platform offers users the capability to build, train and evaluate deep learning models. The RZTDL SDK provides the capabilities needed to quickly build and test DL models.
This example uses the Titanic dataset (titanic/train_nan_removed.csv), a binary classification dataset that can be downloaded from here. The example below shows how to build, visualize and test a simple DL model architecture with two fully connected hidden layers and a single output unit with sigmoid activation. The model is trained by minimizing binary cross-entropy loss using the Adam optimizer.
from rztdl.dl.model import RZTModel
from rztdl.dl.components.layers import Input, Dense
from rztdl.dl.helpers.activations import Relu, Sigmoid
from rztdl.dl.components.losses import BinaryCrossentropy
from rztdl.dl.components.metrics import Accuracy
from rztdl.dl.optimizers import Adam
model = RZTModel(name="binary_classifier")
model.add(Input(shape=[1], name="Label"))
model.add(Input(shape=[4], name="Input_features"))
model.add(Dense(units=16, activation=Relu(), name="dense1"))
model.add(Dense(units=4, activation=Relu(), name="dense2"))
model.add(Dense(units=1, name="dense3", activation=Sigmoid(), outputs="dense_out"))
model.add(Accuracy(labels='Label', predictions='dense_out', name='Accuracy'))
model.add(BinaryCrossentropy(name="Binary_Cross_Entropy", predictions="dense_out", labels="Label"))
model.add(Adam(name="adam"))
The model can be visualised using the plot function:
model.plot()
Once the model is built, it can be trained using the fit function, and the trained model can then be used to predict outcomes on test data using the predict function. Here we first define a function that generates dummy data for training and inference.
import numpy as np
def data_gen_function():
    for i in range(150):
        yield {"Label": np.ones(1).astype(float), "Input_features": np.ones(4).astype(np.float32)}
model.fit(data=data_gen_function, epochs=50, optimizers=["adam"], metrics=['Accuracy'], batch_size=32)
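The predict call below references a generator named prediction_data_gen, which is not defined in the snippet above. A minimal sketch of such a generator, assuming inference only needs the Input_features layer:
def prediction_data_gen():
    # Hypothetical dummy-data generator for inference; yields only the input features, no labels
    for i in range(150):
        yield {"Input_features": np.ones(4).astype(np.float32)}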
y_pred = model.predict(data=prediction_data_gen, layers=['dense_out'], batch_size=32)
Finally, the tested model can be published so that it is accessible in the DL Model Designer in the IDE:
from razor.api import dlmodels
dlmodels.export_model(model)
The RZT aiOS platform provides the following prebuilt blocks that can be used to perform certain operations on DL models. These blocks can be imported and used in a pipeline as described in this section.
import razor.flow as rf
from razor.marketplace.blocks.rzt.CsvReader.projectspacemodelcsvreader import CsvProjectSpaceReader_DlReader
from razor.marketplace.blocks.rzt.train_flow import DLTrainBlock
input_column_mapping = {"Input_features": ['Pclass', 'Age', 'SibSp', 'Fare'],
"Label": ["Survived"]
}
train_data_reader = CsvProjectSpaceReader_DlReader(path="titanic/train_nan_removed.csv",
input_column_mapping=input_column_mapping)
trainer = DLTrainBlock(model=model,
epochs=50,
batch_size=64,
train_data=train_data_reader.data,
test_data=train_data_reader.data,
optimizers=["adam"],
learning_rate=[0.001],
metrics=['Binary_Cross_Entropy',"Accuracy"],
valid_data=train_data_reader.data,
save_path="titanic_model_1",
log_frequency=1,
use_mlc=True
)
trainer.executor = rf.ContainerExecutor(cores=4, memory=12000)
trainer.saved_model = trainer.saved_model.set(transport=rf.KafkaTransport(is_series=False))
# Create and display the pipeline
pipeline = rf.Pipeline("Pipeline for training titanic dataset", targets=[trainer])
pipeline.show()
razor.api.pipelines.save(pipeline, overwrite=True)
INFO:razor.api.impl.pipeline_manager_impl:Registering pipeline with name: `Pipeline for training titanic dataset`
INFO:razor.api.impl.pipeline_manager_impl:Saving pipeline...
INFO:razor.api.impl.pipeline_manager_impl:Pipeline is valid.
INFO:razor.api.impl.pipeline_manager_impl:Pipeline saved!
All available engines can be listed using the API razor.api.engines(). Replace <engine_name> with the engine name string in the code below.
engine = razor.api.engines(<engine_name>)
engine.execute(pipeline)
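If the engine name is not known in advance, the listing call mentioned above can be used first. A minimal sketch, assuming the no-argument call returns a printable listing of the available engines:
# Inspect the engines registered on the platform before choosing one by name
available_engines = razor.api.engines()
print(available_engines)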
import razor.flow as rf
from razor.marketplace.blocks.rzt.CsvReader.projectspacemodelcsvreader import CsvProjectSpaceReader_DlReader
from razor.marketplace.blocks.rzt.eval_flow import DLEvalBlock
input_column_mapping = {"Input_features": ['Pclass', 'Age', 'SibSp', 'Fare'],
"Label": ["Survived"]
}
eval_data_reader = CsvProjectSpaceReader_DlReader(path="titanic/train_nan_removed.csv",
input_column_mapping=input_column_mapping)
run_evaluation = DLEvalBlock(model=model,
batch_size=64,
metrics=['Binary_Cross_Entropy',"Accuracy"],
data=eval_data_reader.data,
)
run_evaluation.executor = rf.ContainerExecutor(cores=4, memory=12000)
# Create and display the pipeline
pipeline = rf.Pipeline("Pipeline for evaluating trained model", targets=[run_evaluation])
pipeline.show()
Save the pipeline:
razor.api.pipelines.save(pipeline, overwrite=True)
Run the pipeline. Replace <engine_name> with the engine name string in the code below.
engine = razor.api.engines(<engine_name>)
engine.execute(pipeline)
For inference, read the test data titanic/test.csv from the project space and specify the column mapping for the input layer Input_features; no label mapping is needed.
from razor.marketplace.blocks.rzt.infer_flow import DLInferBlock
input_column_mapping = {"Input_features": ['Pclass', 'Age', 'SibSp', 'Fare']
}
test_data_reader = CsvProjectSpaceReader_DlReader(path="titanic/test.csv",
input_column_mapping=input_column_mapping)
predicter = DLInferBlock(model=model,
data=test_data_reader.data,
load_path='titanic_model_1',
layers=['dense_out'],
batch_size=64
)
predicter.executor = rf.ContainerExecutor(cores=4, memory=12000)
infer_pipeline = rf.Pipeline(targets=[predicter])
infer_pipeline.show()
razor.api.pipelines.save(infer_pipeline, overwrite=True)
All available engines can be listed using the API razor.api.engines(). Replace <engine_name> with the engine name string in the code below.
engine = razor.api.engines(<engine_name>)
engine.execute(infer_pipeline)
<razor_tools.backend.ipython.mime.run_monitor.RunMonitor at 0x7f1cb4656290>
The RZT aiOS platform provides an API, import_model, to import a model from the UI.
imported_model = dlmodels.import_model("binary_classifier", display=True)
This API takes the name of the model as defined in the Model Designer and imports the specified model into the Jupyter notebook instance. The display attribute allows the user to view the code that is imported into the Jupyter instance.
One can then create a pipeline using imported_model and initiate a training, inference or evaluation flow, as sketched below.
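For example, a sketch of wiring the imported model into a training pipeline, reusing the reader and DLTrainBlock configuration shown earlier. It assumes the imported model keeps the layer, optimizer and metric names defined above; the save path "imported_model_1" and the pipeline name are illustrative.
import razor.flow as rf
from razor.marketplace.blocks.rzt.CsvReader.projectspacemodelcsvreader import CsvProjectSpaceReader_DlReader
from razor.marketplace.blocks.rzt.train_flow import DLTrainBlock

# Same column mapping and project-space CSV reader as in the training flow above
input_column_mapping = {"Input_features": ['Pclass', 'Age', 'SibSp', 'Fare'],
                        "Label": ["Survived"]}
train_data_reader = CsvProjectSpaceReader_DlReader(path="titanic/train_nan_removed.csv",
                                                   input_column_mapping=input_column_mapping)

# Train the imported model with the same settings used for the SDK-built model
trainer = DLTrainBlock(model=imported_model,
                       epochs=50,
                       batch_size=64,
                       train_data=train_data_reader.data,
                       test_data=train_data_reader.data,
                       valid_data=train_data_reader.data,
                       optimizers=["adam"],
                       learning_rate=[0.001],
                       metrics=['Binary_Cross_Entropy', 'Accuracy'],
                       save_path="imported_model_1",  # illustrative save path
                       log_frequency=1,
                       use_mlc=True)

pipeline = rf.Pipeline("Pipeline for training imported model", targets=[trainer])
pipeline.show()
razor.api.pipelines.save(pipeline, overwrite=True)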