Using Amazon (AWS) Storage (S3) for NuGet - configurable bucket names

In the latest versions, there is a capability to use AWS S3 as the NuGet package store. However, the AWS bucket name is not configurable; Orchestrator creates it as orchestrator-{GUID for the current tenant}. This creates issues when AWS resources are provisioned automatically with Terraform or other Infrastructure as Code solutions, because the S3 bucket cannot be controlled by the provisioning code and some manual or post-deployment activities become necessary. It would be good to be able to create the bucket beforehand and then point to it as part of the connection string. This has come up in a real-life implementation.
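
As a rough illustration of the workflow being asked for: the bucket would be pre-created by the provisioning code (Terraform in our case), and Orchestrator would only be pointed at it. A minimal sketch with Python/boto3 standing in for the IaC tool; the bucket name and region are placeholders, not anything Orchestrator uses today:

import boto3

s3 = boto3.client("s3")

# Pre-create the package bucket with a name we control, instead of letting
# Orchestrator generate orchestrator-{GUID} on its own.
s3.create_bucket(
    Bucket="my-company-orchestrator-packages",  # hypothetical, company-chosen name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},  # omit for us-east-1
)

# Orchestrator would then reference this existing bucket in its storage
# connection string rather than creating its own.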

In the latest versions, there is a capability to use AWS S3 as the NuGet package store. The connectivity to AWS uses traditional IAM users with access/secret keys. In recent AWS deployments, we have seen a requirement to support IAM instance roles. It would be good to include this capability in Orchestrator as well. This has come up in a real-life implementation.
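
For reference, most AWS SDKs already fall back to instance-role credentials when no explicit keys are supplied, so the change is mainly in how Orchestrator builds its S3 client. A minimal sketch with Python/boto3 (illustrative only; Orchestrator itself is .NET, and the bucket name is hypothetical):

import boto3

# Current approach: explicit IAM user credentials, so access/secret keys
# have to be stored in configuration (placeholders below).
s3_with_keys = boto3.client(
    "s3",
    aws_access_key_id="AKIA...PLACEHOLDER",
    aws_secret_access_key="...PLACEHOLDER...",
)

# Instance-role approach: no keys in code or config. The default credential
# chain resolves temporary credentials from the role attached to the EC2
# instance (via the instance metadata service).
s3_with_role = boto3.client("s3")

# Either client can then read the package bucket.
resp = s3_with_role.list_objects_v2(Bucket="my-orchestrator-packages")  # hypothetical name
for obj in resp.get("Contents", []):
    print(obj["Key"])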

I would second this. In our setup, we do not have full access to the AWS accounts and must provision infrastructure through CloudFormation via an assumed role.

It would be nice to be able to reference an existing S3 bucket by ARN or URI and provide a role to the EC2 instance to access the S3 bucket. We have already attached the role to our instance in order to sync the packages via PowerShell and the AWS CLI.
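
To illustrate, this is roughly what pushing packages into an existing bucket looks like when only the instance role is used (no stored keys); the bucket and prefix names are made up for the example:

import pathlib
import boto3

s3 = boto3.client("s3")  # credentials resolved from the attached instance role

bucket = "my-company-orchestrator-packages"   # pre-existing bucket (hypothetical name)
prefix = "Orchestrator-Host/"                 # target "folder", i.e. key prefix

# Upload every local .nupkg, mirroring what an `aws s3 sync` of the
# package feed does today.
for pkg in pathlib.Path("./packages").glob("*.nupkg"):
    s3.upload_file(str(pkg), bucket, prefix + pkg.name)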

As an alternative, I was also going to look into AWS EFS as we also have a need for shared configuration and network applications common between machines.

Is there any update on this? I am trying this again and I do not see any change. I totally agree with the two recommendations mentioned above: eliminating access and secret keys, and accessing S3 (from the app server) through roles by providing a bucket name.

And I wonder how it is going to create folders. Locally it creates an orchestrator-{GUID for the current tenant} folder for each tenant, plus an Orchestrator-Host folder for libraries. But S3 bucket names have to be globally unique, so I am not sure how this host folder will get created and shared.
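
My guess is that S3 has no real folders anyway: only the bucket name has to be globally unique, while the per-tenant orchestrator-{GUID} and Orchestrator-Host "folders" could simply be key prefixes inside one bucket. A rough boto3 sketch of listing such prefixes (the bucket name is made up):

import boto3

s3 = boto3.client("s3")

# List the top-level "folders" (key prefixes) in a single shared bucket.
resp = s3.list_objects_v2(
    Bucket="my-orchestrator-packages",  # hypothetical pre-created bucket
    Delimiter="/",
)
for cp in resp.get("CommonPrefixes", []):
    # e.g. "orchestrator-3f2c.../" per tenant, plus "Orchestrator-Host/"
    print(cp["Prefix"])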

Hello. I came to the same conclusion as you. Our VPC admin prefers to use roles/ARN/URI, and the secret key + access key requirement was a blocking point. The only way I was able to make it work was through the creation of an IAM user (a hard exception to get approved internally). Have any of you ended up using S3 with an IAM user? If yes, have you had any issues with user permissions?

I have set the IAM role permissions based on the AWS QuickStart role (not really meant for an IAM user, but they seem to have a similar policy for Storage Gateway).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3Bucket",
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:DeleteBucket",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketLocation",
                "s3:GetBucketVersioning",
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::orchestrator*"
        },
        {
            "Sid": "S3Object",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::orchestrator*/*"
        }
    ]
}

However, UiPath recommended that the IAM user should have the S3FullAccess permission. It seems to work normally from Orchestrator.

Was there ever any additional movement here? We also require cross-account role assumption (no users local to the resource accounts), so this is currently a hard stop for us interfacing with AWS.
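
For context, what we need Orchestrator to support is the standard STS AssumeRole flow: assume a role in the account that owns the bucket and use the returned temporary credentials. A rough boto3 sketch of the equivalent (the role ARN and bucket name are placeholders):

import boto3

sts = boto3.client("sts")

# Assume a role in the account that owns the package bucket.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/OrchestratorS3Access",  # placeholder
    RoleSessionName="orchestrator-packages",
)
creds = assumed["Credentials"]

# Build an S3 client from the temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

print(s3.list_objects_v2(Bucket="my-orchestrator-packages").get("KeyCount", 0))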