Issue while training ObjectDetection ML model

Hi,

I am trying to train the ObjectDetection ML model from the out-of-the-box (OOB) models, but I'm getting the error below. I have checked the documentation and community forums but could not find a solution. Please help me resolve this.

Error:

File “/microservice/yolov3/utils.py”, line 104, in load_yolo_weights conv_layer = model.get_layer(conv_layer_name) File “/home/aicenter/.local/lib/python3.8/site-packages/keras/engine/training.py”, line 2828, in get_layer raise ValueError(f’No such layer: {name}. Existing layers are: ’ ValueError: No such layer: conv2d_75. Existing layers are: [‘input_1’, ‘conv2d’, ‘batch_normalization’, ‘leaky_re_lu’, ‘zero_padding2d’, ‘conv2d_1’, ‘batch_normalization_1’, ‘leaky_re_lu_1’, ‘conv2d_2’, ‘batch_normalization_2’, ‘leaky_re_lu_2’, ‘conv2d_3’, ‘batch_normalization_3’, ‘leaky_re_lu_3’, ‘tf.operators.add’, ‘zero_padding2d_1’, ‘conv2d_4’, ‘batch_normalization_4’, ‘leaky_re_lu_4’, ‘conv2d_5’, ‘batch_normalization_5’, ‘leaky_re_lu_5’, ‘conv2d_6’, ‘batch_normalization_6’, ‘leaky_re_lu_6’, ‘tf.operators.add_1’, ‘conv2d_7’, ‘batch_normalization_7’, ‘leaky_re_lu_7’, ‘conv2d_8’, ‘batch_normalization_8’, ‘leaky_re_lu_8’, ‘tf.operators.add_2’, ‘zero_padding2d_2’, ‘conv2d_9’, ‘batch_normalization_9’, ‘leaky_re_lu_9’, ‘conv2d_10’, ‘batch_normalization_10’, ‘leaky_re_lu_10’, ‘conv2d_11’, ‘batch_normalization_11’, ‘leaky_re_lu_11’, ‘tf.operators.add_3’, ‘conv2d_12’, ‘batch_normalization_12’, ‘leaky_re_lu_12’, ‘conv2d_13’, ‘batch_normalization_13’, ‘leaky_re_lu_13’, ‘tf.operators.add_4’, ‘conv2d_14’, ‘batch_normalization_14’, ‘leaky_re_lu_14’, ‘conv2d_15’, ‘batch_normalization_15’, ‘leaky_re_lu_15’, ‘tf.operators.add_5’, ‘conv2d_16’, ‘batch_normalization_16’, ‘leaky_re_lu_16’, ‘conv2d_17’, ‘batch_normalization_17’, ‘leaky_re_lu_17’, ‘tf.operators.add_6’, ‘conv2d_18’, ‘batch_normalization_18’, ‘leaky_re_lu_18’, ‘conv2d_19’, ‘batch_normalization_19’, ‘leaky_re_lu_19’, ‘tf.operators.add_7’, ‘conv2d_20’, ‘batch_normalization_20’, ‘leaky_re_lu_20’, ‘conv2d_21’, ‘batch_normalization_21’, ‘leaky_re_lu_21’, ‘tf.operators.add_8’, ‘conv2d_22’, ‘batch_normalization_22’, ‘leaky_re_lu_22’, ‘conv2d_23’, ‘batch_normalization_23’, ‘leaky_re_lu_23’, ‘tf.operators.add_9’, ‘conv2d_24’, 
‘batch_normalization_24’, ‘leaky_re_lu_24’, ‘conv2d_25’, ‘batch_normalization_25’, ‘leaky_re_lu_25’, ‘tf.operators.add_10’, ‘zero_padding2d_3’, ‘conv2d_26’, ‘batch_normalization_26’, ‘leaky_re_lu_26’, ‘conv2d_27’, ‘batch_normalization_27’, ‘leaky_re_lu_27’, ‘conv2d_28’, ‘batch_normalization_28’, ‘leaky_re_lu_28’, ‘tf.operators.add_11’, ‘conv2d_29’, ‘batch_normalization_29’, ‘leaky_re_lu_29’, ‘conv2d_30’, ‘batch_normalization_30’, ‘leaky_re_lu_30’, ‘tf.operators.add_12’, ‘conv2d_31’, ‘batch_normalization_31’, ‘leaky_re_lu_31’, ‘conv2d_32’, ‘batch_normalization_32’, ‘leaky_re_lu_32’, ‘tf.operators.add_13’, ‘conv2d_33’, ‘batch_normalization_33’, ‘leaky_re_lu_33’, ‘conv2d_34’, ‘batch_normalization_34’, ‘leaky_re_lu_34’, ‘tf.operators.add_14’, ‘conv2d_35’, ‘batch_normalization_35’, ‘leaky_re_lu_35’, ‘conv2d_36’, ‘batch_normalization_36’, ‘leaky_re_lu_36’, ‘tf.operators.add_15’, ‘conv2d_37’, ‘batch_normalization_37’, ‘leaky_re_lu_37’, ‘conv2d_38’, ‘batch_normalization_38’, ‘leaky_re_lu_38’, ‘tf.operators.add_16’, ‘conv2d_39’, ‘batch_normalization_39’, ‘leaky_re_lu_39’, ‘conv2d_40’, ‘batch_normalization_40’, ‘leaky_re_lu_40’, ‘tf.operators.add_17’, ‘conv2d_41’, ‘batch_normalization_41’, ‘leaky_re_lu_41’, ‘conv2d_42’, ‘batch_normalization_42’, ‘leaky_re_lu_42’, ‘tf.operators.add_18’, ‘zero_padding2d_4’, ‘conv2d_43’, ‘batch_normalization_43’, ‘leaky_re_lu_43’, ‘conv2d_44’, ‘batch_normalization_44’, ‘leaky_re_lu_44’, ‘conv2d_45’, ‘batch_normalization_45’, ‘leaky_re_lu_45’, ‘tf.operators.add_19’, ‘conv2d_46’, ‘batch_normalization_46’, ‘leaky_re_lu_46’, ‘conv2d_47’, ‘batch_normalization_47’, ‘leaky_re_lu_47’, ‘tf.operators.add_20’, ‘conv2d_48’, ‘batch_normalization_48’, ‘leaky_re_lu_48’, ‘conv2d_49’, ‘batch_normalization_49’, ‘leaky_re_lu_49’, ‘tf.operators.add_21’, ‘conv2d_50’, ‘batch_normalization_50’, ‘leaky_re_lu_50’, ‘conv2d_51’, ‘batch_normalization_51’, ‘leaky_re_lu_51’, ‘tf.operators.add_22’, ‘conv2d_52’, ‘batch_normalization_52’, ‘leaky_re_lu_52’, ‘conv2d_53’, 
‘batch_normalization_53’, ‘leaky_re_lu_53’, ‘conv2d_54’, ‘batch_normalization_54’, ‘leaky_re_lu_54’, ‘conv2d_55’, ‘batch_normalization_55’, ‘leaky_re_lu_55’, ‘conv2d_56’, ‘batch_normalization_56’, ‘leaky_re_lu_56’, ‘conv2d_59’, ‘batch_normalization_58’, ‘leaky_re_lu_58’, ‘tf.image.resize’, ‘tf.concat’, ‘conv2d_60’, ‘batch_normalization_59’, ‘leaky_re_lu_59’, ‘conv2d_61’, ‘batch_normalization_60’, ‘leaky_re_lu_60’, ‘conv2d_62’, ‘batch_normalization_61’, ‘leaky_re_lu_61’, ‘conv2d_63’, ‘batch_normalization_62’, ‘leaky_re_lu_62’, ‘conv2d_64’, ‘batch_normalization_63’, ‘leaky_re_lu_63’, ‘conv2d_67’, ‘batch_normalization_65’, ‘leaky_re_lu_65’, ‘tf.image.resize_1’, ‘tf.concat_1’, ‘conv2d_68’, ‘batch_normalization_66’, ‘leaky_re_lu_66’, ‘conv2d_69’, ‘batch_normalization_67’, ‘leaky_re_lu_67’, ‘conv2d_70’, ‘batch_normalization_68’, ‘leaky_re_lu_68’, ‘conv2d_71’, ‘batch_normalization_69’, ‘leaky_re_lu_69’, ‘conv2d_72’, ‘batch_normalization_70’, ‘leaky_re_lu_70’, ‘conv2d_73’, ‘conv2d_65’, ‘conv2d_57’, ‘batch_normalization_71’, ‘batch_normalization_64’, ‘batch_normalization_57’, ‘leaky_re_lu_71’, ‘leaky_re_lu_64’, ‘leaky_re_lu_57’, ‘conv2d_74’, ‘conv2d_66’, ‘conv2d_58’, ‘tf.compat.v1.shape’, ‘tf.compat.v1.shape_1’, ‘tf.compat.v1.shape_2’, ‘tf.operators.getitem_1’, ‘tf.operators.getitem_10’, ‘tf.operators.getitem_19’, ‘tf.range_1’, ‘tf.range’, ‘tf.range_3’, ‘tf.range_2’, ‘tf.range_5’, ‘tf.range_4’, ‘tf.expand_dims_1’, ‘tf.expand_dims’, ‘tf.expand_dims_3’, ‘tf.expand_dims_2’, ‘tf.expand_dims_5’, ‘tf.expand_dims_4’, ‘tf.tile_1’, ‘tf.tile’, ‘tf.tile_4’, ‘tf.tile_3’, ‘tf.tile_7’, ‘tf.tile_6’, ‘tf.operators.getitem_6’, ‘tf.operators.getitem_7’, ‘tf.operators.getitem_15’, ‘tf.operators.getitem_16’, ‘tf.operators.getitem_24’, ‘tf.operators.getitem_25’, ‘tf.operators.getitem’, ‘tf.concat_2’, ‘tf.operators.getitem_9’, ‘tf.concat_5’, ‘tf.operators.getitem_18’, ‘tf.concat_8’, ‘tf.reshape’, ‘tf.operators.getitem_8’, ‘tf.reshape_1’, ‘tf.operators.getitem_17’, ‘tf.reshape_2’, 
‘tf.operators.getitem_26’, ‘tf.operators.getitem_2’, ‘tf.tile_2’, ‘tf.operators.getitem_3’, ‘tf.operators.getitem_11’, ‘tf.tile_5’, ‘tf.operators.getitem_12’, ‘tf.operators.getitem_20’, ‘tf.tile_8’, ‘tf.operators.getitem_21’, ‘tf.math.sigmoid’, ‘tf.cast’, ‘tf.math.exp’, ‘tf.math.sigmoid_3’, ‘tf.cast_1’, ‘tf.math.exp_1’, ‘tf.math.sigmoid_6’, ‘tf.cast_2’, ‘tf.math.exp_2’, ‘tf.operators.add_23’, ‘tf.math.multiply_1’, ‘tf.operators.add_24’, ‘tf.math.multiply_4’, ‘tf.operators.add_25’, ‘tf.math.multiply_7’, ‘tf.math.multiply’, ‘tf.math.multiply_2’, ‘tf.operators.getitem_4’, ‘tf.operators.getitem_5’, ‘tf.math.multiply_3’, ‘tf.math.multiply_5’, ‘tf.operators.getitem_13’, ‘tf.operators.getitem_14’, ‘tf.math.multiply_6’, ‘tf.math.multiply_8’, ‘tf.operators.getitem_22’, ‘tf.operators.getitem_23’, ‘tf.concat_3’, ‘tf.math.sigmoid_1’, ‘tf.math.sigmoid_2’, ‘tf.concat_6’, ‘tf.math.sigmoid_4’, ‘tf.math.sigmoid_5’, ‘tf.concat_9’, ‘tf.math.sigmoid_7’, ‘tf.math.sigmoid_8’, ‘tf.concat_4’, ‘tf.concat_7’, ‘tf.concat_10’]. 2023-12-01 07:14:48,509 - UiPath_core.trainer_run:main:107 - INFO: Job run stopped.
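In case it is relevant: my understanding is that Keras assigns layer names from a session-wide counter ("conv2d", "conv2d_1", …), so if another model was built earlier in the same Python session, a freshly built model's layers get shifted names while the weight loader still looks up the original ones, which would explain a lookup for "conv2d_75" when the model only goes up to "conv2d_74". A toy simulation of that counter behavior (pure Python, just to illustrate the naming shift; this is not the actual Keras internals):

```python
from collections import defaultdict

class NameCounter:
    """Mimics Keras-style session-wide unique layer naming."""
    def __init__(self):
        self.counts = defaultdict(int)

    def unique_name(self, base):
        # First use of a base name gets no suffix; later uses get _1, _2, ...
        n = self.counts[base]
        self.counts[base] += 1
        return base if n == 0 else f"{base}_{n}"

session = NameCounter()
first_model = [session.unique_name("conv2d") for _ in range(75)]   # conv2d ... conv2d_74
second_model = [session.unique_name("conv2d") for _ in range(75)]  # conv2d_75 ... conv2d_149

print(first_model[-1])   # conv2d_74
print(second_model[0])   # conv2d_75 -- the name the loader was looking for
```

If that is indeed the cause, calling `tf.keras.backend.clear_session()` before building the model (so the name counters restart from zero) is the usual workaround.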

@Rohit_More

Can you please share the pipeline parameters?

Which dataset did you use, how many samples does it contain, and what is its structure?

Cheers

Hi @Anil_G ,

Thanks for the response.
I didn't update any parameters in the pipeline. Are we supposed to pass any? I used .jpg images (with XML annotation files), 100+ images in the dataset. Here is the format of the annotation:

That issue is resolved now, but I'm getting another error:

local variable 'save_directory' referenced before assignment
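For anyone hitting the same message: "local variable referenced before assignment" (an `UnboundLocalError`) usually means the variable is only assigned inside a conditional branch that did not run, e.g. `save_directory` is set only when some export or save option is enabled. A minimal reproduction of the pattern, with hypothetical function and parameter names (I don't know the actual code path in the pipeline):

```python
def save_results(results, export=False):
    if export:
        save_directory = "/tmp/output"   # only bound on this branch
    # UnboundLocalError when export is False and the branch is skipped:
    return save_directory

def save_results_fixed(results, export=False):
    save_directory = None                # bind a default before the branch
    if export:
        save_directory = "/tmp/output"
    return save_directory
```

So the thing to check is whichever pipeline option controls saving/exporting output, since the failing code likely assumes it is enabled.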