Deploying ML Skill fails

Hi
I am trying to deploy an ML Skill based on the "UiPath Image Analysis ImageClassification" package. The dataset uploaded successfully and the pipeline trained and evaluated without errors.

When deploying the skill, there are about 20 consecutive warnings in the log, and then the skill changes status from Deploying to Failed.

My first test yesterday, with a different image dataset, went fine all the way through, including deployment of the skill.

The ML Log contains the following information:

[2024-02-06 13:15:07 +0000] [928] [ERROR] Exception in worker process
  File "/microservice/main.py", line 29, in __init__
    self.mymodel = model.ImageClassificationModel.load(self.opt)
  File "", line 230, in load
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 592, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 851, in _load
    result = unpickler.load()
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 843, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 832, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/aicenter/.local/lib/python3.8/site-packages/torch/serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Hi @jonas.bergman,

I'm not sure, but try this: modify the code in main.py at line 29, where the model is loaded. Use torch.load("", map_location=torch.device('cpu')) in place of model.ImageClassificationModel.load(self.opt), and make sure the model still loads correctly.
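
I can't see the source of the out-of-the-box package, so this is only a sketch of what such a change might look like; load_model_cpu and the self.opt.model_path attribute are my own guesses, not the package's actual API:

import torch

def load_model_cpu(checkpoint_path):
    # Hypothetical helper: load a pickled PyTorch checkpoint and remap any
    # CUDA storages onto the CPU so a CPU-only container can deserialize it.
    return torch.load(checkpoint_path, map_location=torch.device("cpu"))

# In main.py, line 29 would then become something like:
#   self.mymodel = load_model_cpu(self.opt.model_path)  # the path attribute is a guess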

Thanks,

Thanks. We are using an out-of-the-box package from UiPath.
I guess it is not possible to modify that?

I found the solution. I guess I lack a bit of knowledge about how ML works, but the GUI/logs could also have been a bit clearer.
Deploying a CPU ML Skill from a GPU-trained pipeline is not possible: the GPU pipeline saves the model with CUDA tensors, which a CPU-only skill cannot deserialize (hence the RuntimeError above). CPU pipeline with CPU skill, or GPU pipeline with GPU skill, are the working combinations.
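
For anyone who lands here later, the PyTorch behaviour behind the error can be reproduced outside AI Center in a few lines (a sketch, not UiPath code; it only does something on a machine that has a GPU):

import torch

# A GPU-trained pipeline pickles its tensors with the CUDA device recorded
# in the checkpoint, which is what trips up a CPU-only skill at load time.
if torch.cuda.is_available():
    torch.save(torch.randn(2, 2).cuda(), "gpu_trained.pt")

    # On a CPU-only machine, a plain torch.load("gpu_trained.pt") raises the
    # same RuntimeError as in the log above. Remapping the storages works:
    weights = torch.load("gpu_trained.pt", map_location=torch.device("cpu"))
    print(weights.device)  # cpu

Since the out-of-the-box package apparently does not do that remapping itself, the skill hardware has to match the hardware the pipeline was trained on.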
