Yeah, that’s what I thought. But I was wondering whether the prompt size (number of characters) or the length of the output drives how many AI units are consumed, similar to how token consumption works with Google Vertex.