Nix copy to S3-compatible service

I’m trying to use a bucket hosted in OpenIO, behind an S3 compatible proxy, with no success for now.

I can successfully list (and read and write) the contents of my bucket using the aws CLI (credentials are in ~/.aws/credentials):

$ http_proxy=localhost:6007 aws s3 ls --endpoint-url=http://openio s3://my-store
2020-04-20 14:36:38          0 test
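For reference, the credentials file has the standard AWS shape (the profile name and key values below are placeholders, not my real ones):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```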

Alas, trying to replicate this success with nix copy exits successfully (return code 0) but has no effect:

$ http_proxy=localhost:6007 nix copy --to 's3://my-store?scheme=http&endpoint=openio' /nix/store/zdwj742fqcc71n9k8zj2a3m1f9ryyrb4-my-service

I say no effect because I cannot see any new object created in the bucket (since there may be a delay, I also checked 24 hours later).

The Nix documentation states that a custom endpoint must support HTTPS. I'm unclear why that requirement exists, especially since I can speak plain HTTP against AWS's own endpoints… I think I need some guidance on what to check. Starting with: are there any success stories using custom S3 endpoints with Nix?


OK, I simplified the problem: I removed the http_proxy, deleted any configuration in ~/.aws/config, made sure the credentials were OK, and cleared out ~/.cache/nix. After that, it went through!
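In other words, the sequence that finally worked for me looked roughly like this (the bucket name, endpoint, and store path are the ones from above; adapt them to your setup):

```shell
# Make sure no proxy variable interferes with Nix's built-in S3 client
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# Drop stale binary-cache metadata; Nix caches narinfo lookups here
rm -rf ~/.cache/nix

# Move any ~/.aws/config out of the way
# (the credentials themselves stay in ~/.aws/credentials)
mv ~/.aws/config ~/.aws/config.bak 2>/dev/null || true

# The plain-HTTP copy to the custom endpoint now works
nix copy --to 's3://my-store?scheme=http&endpoint=openio' \
  /nix/store/zdwj742fqcc71n9k8zj2a3m1f9ryyrb4-my-service
```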

I think the manual should drop the mention of HTTPS being mandatory, since it's at best confusing and at worst just wrong.

OK, it seems the main remaining problem is that nix copy does not honor the http_proxy environment variable. I've tried the all-caps variants too, and even the NIX_CURL_ARGS trick from Installing NixOS behind corporate proxy · GitHub.
