Documentation Improvement - Connecting self-hosted Storage API to S3 #12919
Replies: 8 comments 6 replies
Hey!
@mohannadhussain amazing, thank you for sharing this!
Is this still the best way to add a bucket for object storage? Or can it be done in the Coolify UI?
Heads up for anyone reading: once you switch from local to S3, you will want to actually upload a file, since that is when you see the changes in the bucket. Just creating a bucket in the Studio does not do anything on the S3 side until there is content in it. Also, as of writing this there is an open bug for resumable uploads in the self-hosted Studio, so you might want to try uploading using the API instead (see the sketch below).
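For example, a minimal upload with the supabase-js client might look like the sketch below. The URL, key, bucket name, and file contents are all placeholders to replace with your own values:

```ts
import { createClient } from '@supabase/supabase-js'

// Placeholders: point these at your self-hosted API gateway and its key.
const supabase = createClient('http://localhost:8000', 'YOUR_SERVICE_ROLE_OR_ANON_KEY')

async function main() {
  // Create the bucket if it does not exist yet (this errors if it already exists).
  const { error: bucketError } = await supabase.storage.createBucket('test-bucket')
  if (bucketError) console.warn('createBucket:', bucketError.message)

  // Upload a small text file; this is the step that actually writes to S3.
  const { data, error } = await supabase.storage
    .from('test-bucket')
    .upload('hello.txt', new Blob(['hello from self-hosted storage']), {
      contentType: 'text/plain',
      upsert: true,
    })
  if (error) throw error
  console.log('Uploaded:', data.path)
}

main().catch(console.error)
```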
I changed the environment of supabase-storage as instructed and uploaded the file. The following error occurs when I print the supabase-storage docker logs:
If anyone is having the same issue, I found the fix in a closed issue, in the last line of a user's docker-compose.yml storage section. It turns out the issue was probably related to me running Supabase behind a Sophos XGS WAF.
There is a working example.
I posted about this in Discord and wanted to share my findings here in hopes of improving documentation for others going forward.
Connecting Supabase Storage API to AWS S3
The instructions here apply to the self-hosted version of Supabase, i.e. the one used via `docker-compose`.
Step 1: What you need
- A self-hosted Supabase setup running via `docker-compose`
- An AWS S3 bucket and the AWS region it lives in
- AWS credentials (an access key ID and secret access key) with read/write access to that bucket
Step 2: Configuration
In your supabase directory, open `docker-compose.yml` in your favorite editor. Locate the `storage` section and change the following environment variables:
- `STORAGE_BACKEND: file` to `STORAGE_BACKEND: s3`
- `REGION` to your AWS region, e.g. `us-east-1`
- `GLOBAL_S3_BUCKET` to the name of your AWS S3 bucket
- `TENANT_ID` is the top-level directory (i.e. the S3 path prefix); you can change it if you want.
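For reference, here is a minimal sketch of what the `storage` service's `environment` block could look like after these changes. The region, bucket name, and tenant ID below are placeholders; keep whatever other variables your compose file already defines.

```yaml
storage:
  environment:
    # switch the storage backend from the local file system to S3
    STORAGE_BACKEND: s3
    # AWS region your bucket lives in (placeholder)
    REGION: us-east-1
    # name of your S3 bucket (placeholder)
    GLOBAL_S3_BUCKET: my-supabase-storage
    # top-level directory, i.e. the S3 path prefix (placeholder)
    TENANT_ID: stub
```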
Now you need to set up AWS authentication, which can be done in one of two ways, depending on your preference:
- In `docker-compose.yml`, you can add environment variables for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or
- Create a file named `credentials`, then map it in `docker-compose.yml` under `volumes` like so: `./credentials:/root/.aws/credentials`. The file's contents would look like the example below.
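Assuming the standard AWS shared credentials file format, the `credentials` file would contain something like this (with your real keys substituted):

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```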
Step 3: Test it
Boot up your setup with `docker-compose up`, use the UI to create a new bucket and upload a file into it. Now navigate to your S3 bucket in the AWS console and verify your file was uploaded there. The path looks like this: `{TENANT_ID from docker-compose.yml}/{your bucket name}/{your file name}`. If you prefer checking from code instead of the console, see the sketch below.
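This is a rough sketch using the AWS SDK for JavaScript v3; the region, bucket name, and `stub` tenant prefix are assumptions you would adjust to your own setup:

```ts
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3'

// Credentials are picked up from the environment or ~/.aws/credentials.
const s3 = new S3Client({ region: 'us-east-1' })

async function listUploads() {
  // Storage API objects land under {TENANT_ID}/{bucket name}/{file name},
  // e.g. "stub/test-bucket/hello.txt".
  const out = await s3.send(
    new ListObjectsV2Command({ Bucket: 'my-supabase-storage', Prefix: 'stub/' })
  )
  for (const obj of out.Contents ?? []) {
    console.log(obj.Key)
  }
}

listUploads().catch(console.error)
```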
A few gotchas to keep in mind: