API environment setup
To access the evroc API you need to log in and set up your environment. The steps required are described in the IAM chapter.
Storage API
This guide describes how to access Object Storage programmatically via the S3-compatible API. This setup is suitable for configuring an existing S3-compatible client or for writing your own code with an S3-compatible SDK.
Generate credentials
A client requires credentials to access the Object Storage API. These credentials authenticate the requests the client sends; the Object Storage API rejects any request without valid credentials.
Note: These credentials are secrets that shouldn't be shared publicly. Take care with how you store and access your credentials!
This guide requires you to have the evroc CLI downloaded and logged in to your evroc organisation. For instructions, see the getting started instructions.
Create a bucket
Before creating a bucket service account, create a bucket that it'll be given access to. For this
guide, a bucket named my-bucket is created using the following command in the CLI:
evroc storage bucket create my-bucket
An existing bucket can also be used.
Create a bucket service account
The credentials you're about to generate are tied to a bucket service account, which represents the client you want to give access to.
To create a new bucket service account, use the
evroc storage bucketserviceaccount create command in the CLI.
For example, to create a service account called bsa-external with
access to the bucket my-bucket, run:
evroc storage bucketserviceaccount create bsa-external --bucket my-bucket
To give an existing bucket service account access to my-bucket instead, use the evroc storage bucketserviceaccount update command in the CLI to change which buckets it has access to.
Extract the credentials
The service account now has credentials tied to it that are authorized to
access the bucket my-bucket.
These credentials are given as a pair of an access key and a secret key.
The credentials are stored in a separate resource called a bucket service account secret. A secret is created automatically as part of creating the bucket service account in the step above. To see the name of the secret, run:
evroc storage bucketserviceaccount get bsa-external
The output looks something like:
kind: BucketServiceAccount
apiVersion: storage/v1
metadata:
  id: bsa-external
  uid: a20c2c9f-5a26-4b74-afaa-87c370a3f54d
  userLabels: {}
  systemLabels: {}
  creationTimestamp: "2026-01-27T15:45:18Z"
  generation: 1
  project: my-project
  region: se-sto
  resourceVersion: "6471054"
spec:
  buckets:
  - my-bucket
status:
  conditions:
  - lastTransitionTime: "2026-01-27T15:45:19Z"
    message: Created successfully
    observedGeneration: 1
    reason: Ready
    status: "True"
    type: Ready
  s3CredentialsSecretName: bsa-external-s3-creds
The s3CredentialsSecretName: bsa-external-s3-creds field indicates the name of the associated secret.
To get the access key and secret key, run:
evroc storage bucketserviceaccountsecret get bsa-external-s3-creds
The output contains the keys:
# ...
data:
  accessKeyID: <the-access-key>
  secretAccessKey: <the-secret-key>
These can be used with any S3-compatible client.
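Many S3-compatible tools, including the AWS CLI and the AWS SDKs, read credentials from the standard AWS environment variables. As a sketch (the variable names below are the AWS convention, not evroc-specific), the keys can be exported like this:
export AWS_ACCESS_KEY_ID=<the-access-key>
export AWS_SECRET_ACCESS_KEY=<the-secret-key>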
Configure a client
The Object Storage service exposes an S3-compatible API that works with many existing clients and SDKs.
You need to configure your client to talk to the evroc API.
The configuration interface might look different depending on your client.
The configuration values you must set are:
- The endpoint to https://s3.se-sto.evroc.com/.
- The region to se-sto.
- The credentials to the access key and secret key you generated in Generate credentials.
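As an example, the standard AWS CLI can be pointed at evroc Object Storage by passing these values directly on the command line. This is a sketch assuming the AWS CLI is installed and that the credentials are available to it, for instance through the environment variables exported earlier or via aws configure; ./local-file is a placeholder path:
aws s3 ls s3://my-bucket --endpoint-url https://s3.se-sto.evroc.com/ --region se-sto
aws s3 cp ./local-file s3://my-bucket/local-file --endpoint-url https://s3.se-sto.evroc.com/ --region se-sto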
Code examples
This section provides code snippets showing how to configure S3 SDKs for evroc Object Storage in different languages.
Python: AWS Boto3
import io

import boto3

bucket_name = "my-bucket"
access_key_id = "<access-key-id>"
secret_access_key = "<secret-access-key>"

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.se-sto.evroc.com/",
    region_name="se-sto",
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
)

# Upload/update a single object
s3.upload_fileobj(io.BytesIO(b"The quick brown fox"), Bucket=bucket_name, Key="thefox")

# Get object metadata
object_information = s3.head_object(Bucket=bucket_name, Key="thefox")

# List objects in the bucket
objects = s3.list_objects(Bucket=bucket_name)

# Delete the object
s3.delete_object(Bucket=bucket_name, Key="thefox")
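Rather than hardcoding the keys in source, boto3 can also resolve them through its default credential chain, which includes the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. A minimal sketch, assuming those variables are set from the bucket service account secret and that the object thefox uploaded above still exists:
import boto3

# Credentials are resolved from boto3's default chain (for example, the
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables).
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.se-sto.evroc.com/",
    region_name="se-sto",
)

# Read the object back and print its contents.
response = s3.get_object(Bucket="my-bucket", Key="thefox")
print(response["Body"].read().decode())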
Go: aws-sdk-go-v2
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	var bucketName = "my-bucket"
	var accessKeyId = "<access-key-id>"
	var accessKeySecret = "<secret-access-key>"

	// Resolve all S3 requests to the evroc Object Storage endpoint.
	resolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
		return aws.Endpoint{
			URL:               "https://s3.se-sto.evroc.com/",
			HostnameImmutable: true,
		}, nil
	})

	// Build the SDK configuration with the endpoint, credentials and region.
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithEndpointResolverWithOptions(resolver),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKeyId, accessKeySecret, "")),
		config.WithRegion("se-sto"),
	)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)

	// List the objects in the bucket and print each entry as JSON.
	listObjectsOutput, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
		Bucket: &bucketName,
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, object := range listObjectsOutput.Contents {
		obj, _ := json.MarshalIndent(object, "", "\t")
		fmt.Println(string(obj))
	}
}