Description
Currently you can deploy a serverless endpoint with:

```python
predictor = model.deploy(serverless_inference_config=ServerlessInferenceConfig())
```

However, `serverless_inference_config` is not supported when you call `predictor.update_endpoint`; you have to switch to a standard instance-backed endpoint:

```python
predictor.update_endpoint(model_name=model.name, instance_type="ml.m4.xlarge", initial_instance_count=1)
```

Ideally you would be able to call:

```python
predictor.update_endpoint(model_name=model.name, serverless_inference_config=ServerlessInferenceConfig())
```

to keep the deployment serverless.
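
Until the SDK supports this, one possible workaround is to drop down to boto3: create a new endpoint config whose production variant carries a `ServerlessConfig`, then point the endpoint at it with `update_endpoint`. A rough sketch under that assumption; the function names, variant name, and the endpoint/config name parameters here are placeholders, not SDK API:

```python
def build_serverless_variant(model_name, memory_mb=2048, max_concurrency=5):
    # Production variant for a serverless endpoint config; the
    # ServerlessConfig block takes the place of instance type/count.
    return {
        "ModelName": model_name,
        "VariantName": "AllTraffic",
        "ServerlessConfig": {
            "MemorySizeInMB": memory_mb,      # 1024-6144, multiples of 1024
            "MaxConcurrency": max_concurrency,
        },
    }

def update_to_serverless(endpoint_name, model_name, new_config_name):
    # Imported here so the variant builder above stays importable
    # without boto3 installed.
    import boto3

    sm = boto3.client("sagemaker")
    sm.create_endpoint_config(
        EndpointConfigName=new_config_name,
        ProductionVariants=[build_serverless_variant(model_name)],
    )
    sm.update_endpoint(
        EndpointName=endpoint_name,
        EndpointConfigName=new_config_name,
    )
```

This bypasses the predictor object entirely, so it loses the convenience the issue is asking for, but it shows the update is expressible at the service-API level.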