Spin up cloud resources using JSON

Anand Muthukrishnan

Jun 5, 2025

We are excited to release a new capability today that lets product engineers declare the cloud resources they need using a simple JSON file - ops.json.

Here is how it works.

You can spin up any number of environments in your cloud and launch any number of services within them, just by connecting your GitHub repo with LocalOps.

Within the same repo, you can now add a JSON file called ops.json that declares all the cloud resources the service will need. Say, an S3 bucket:

{
    "dependencies": {
        "s3": {
            "buckets": [
                {
                    "id": "test123",
                    "prefix": "testdep123",
                    "exports": {
                        "ATTACHMENTS_BUCKET_NAME": "$name",
                        "ATTACHMENTS_BUCKET_ARN": "$arn"
                    }
                }
            ]
        }
    }
}


When we pull your code, build, and deploy it, we scan for an ops.json file within the repo. If it is found and it has dependencies like the one above, they are provisioned before your code runs.

Environment vars:

In addition, did you see those exports above? For the bucket created in the example above, we pull in the name and ARN and pass them down to your code as environment variables, so that your code can access the bucket using the AWS SDK to read and write files.
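
As an illustration, here is a minimal Python sketch using boto3 that picks up the exported bucket name. The key name "attachments/hello.txt" is just a placeholder, and we assume the AWS region is available to the SDK from the container environment:

import os
import boto3

# Bucket name injected by LocalOps from the "exports" block in ops.json
bucket = os.environ["ATTACHMENTS_BUCKET_NAME"]

# No keys or secrets are hard-coded here; authentication is handled by
# the IAM role LocalOps attaches to the container (covered below)
s3 = boto3.client("s3")

# Write an object and read it back
s3.put_object(Bucket=bucket, Key="attachments/hello.txt", Body=b"hello")
obj = s3.get_object(Bucket=bucket, Key="attachments/hello.txt")
print(obj["Body"].read())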

But wait. What about authentication? Can we securely authenticate with S3 to access the bucket? Is this bucket private? YES & YES.

Role-based access:

LocalOps already has an IAM role attached to all your containers to securely access AWS resources without any secret or key. We add the following permissions to that IAM role automatically when we provision the S3 bucket.

{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::your-bucket-name",
    "arn:aws:s3:::your-bucket-name/*"
  ]
}

So you don’t have to create an IAM role or add these granular IAM policies yourself to start accessing the bucket. All of this is pre-created for you automatically, just by declaring that S3 dependency in ops.json in your repo.

What about other services? Your service may need an SQS queue, a couple of SNS topics, an RDS database, or an ElastiCache instance.

Support for all common AWS services:

We support all of that today. You can add the following dependencies in the ops.json:

  1. S3 buckets

  2. SQS queues - standard / FIFO

  3. SNS topics

  4. RDS instances - Postgres & MySQL

  5. ElastiCache clusters - Redis & Memcached

Here is a sample `ops.json` file listing all common cloud dependencies on AWS:

{
    "dependencies": {
        "s3": {
            "buckets": [
                {
                    "id": "test123",
                    "prefix": "testdep123",
                    "exports": {
                        "MY_BUCKET_NAME1": "$name",
                        "MY_BUCKET_ARN1": "$arn"
                    }
                }
            ]
        },
        "sns": {
            "topics": [
                {
                    "id": "test123",
                    "prefix": "testdep123",
                    "exports": {
                        "MY_SNS_TOPIC_NAME1": "$name",
                        "MY_SNS_TOPIC_ARN1": "$arn"
                    }
                }
            ]
        },
        "sqs": {
            "queues": [
                {
                    "id": "test123",
                    "prefix": "testdep123",
                    "exports": {
                        "MY_SQS_QUEUE_NAME1": "$name",
                        "MY_SQS_QUEUE_ARN1": "$arn"
                    }
                }
            ]
        },
        "rds": {
            "instances": [
                {
                    "id": "test123",
                    "prefix": "testdep123",
                    "engine": "postgres",
                    "version": "17.5",
                    "storage_gb": 10,
                    "instance_type": "db.t4g.small",
                    "publicly_accessible": false,
                    "exports": {
                        "MY_RDS_INSTANCE_NAME": "$name",
                        "MY_RDS_INSTANCE_ARN": "$arn",
                        "MY_RDS_INSTANCE_ENDPOINT": "$endpoint",
                        "MY_RDS_INSTANCE_ADDRESS": "$address",
                        "MY_RDS_INSTANCE_USERNAME": "$username",
                        "MY_RDS_INSTANCE_PASSWORD_ARN": "$passwordArn",
                        "MY_RDS_INSTANCE_DB_NAME": "$dbName"
                    }
                }
            ]
        },
        "elasticache": {
            "clusters": [
                {
                    "id": "test123",
                    "prefix": "testdep123",
                    "engine": "redis",
                    "version": "7.0",
                    "instance_type": "cache.t4g.small",
                    "num_nodes": 1,
                    "exports": {
                        "MY_ELASTICACHE_CLUSTER_NAME": "$name",
                        "MY_ELASTICACHE_CLUSTER_ARN": "$arn",
                        "MY_ELASTICACHE_CLUSTER_ENDPOINT": "$endpoint",
                        "MY_ELASTICACHE_CLUSTER_ADDRESS": "$address",
                        "MY_ELASTICACHE_CLUSTER_PORT": "$port"
                    }
                }
            ]
        }
    }
}
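
To illustrate how these exports translate into runtime access, here is a short Python sketch (using boto3) that publishes to the SNS topic and sends a message to the SQS queue declared above. The environment variable names come from the sample exports; the message bodies are placeholders, and we assume the AWS region is available to the SDK from the container environment:

import os
import boto3

# Publish to the SNS topic provisioned from ops.json;
# MY_SNS_TOPIC_ARN1 is injected via the "exports" block above
sns = boto3.client("sns")
sns.publish(
    TopicArn=os.environ["MY_SNS_TOPIC_ARN1"],
    Message="hello from a LocalOps-managed service",
)

# Send a message to the SQS queue; boto3 resolves the queue URL
# from the exported queue name
sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName=os.environ["MY_SQS_QUEUE_NAME1"])["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="hello via ops.json-provisioned SQS")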


Pre-configured for Production use:

All the services you see above are configured for production use by default. So:

  1. Encryption is turned ON automatically for all data services like S3, RDS, and ElastiCache.

  2. All resources are spun up within the environment’s VPC wherever applicable, for private-only access.

  3. Backups are turned on by default with 30-day retention.

  4. Other settings are applied that safeguard against downtime and other common incidents.

Tuned for Preview/Ephemeral environments:

All the dependencies defined in ops.json are spun up for every service that declares them in its repo and runs within an environment.

When pull requests are created, we create an ephemeral copy of the service AND its dependencies defined in ops.json. When we do this, we turn OFF all settings in the cloud resources that aren’t relevant for this ephemeral use case, to speed up boot time of the service and save on cloud costs. For example, backups and encryption are turned off by default for all Preview services.

No need for Terraform or Pulumi:

You don’t have to write or maintain Terraform, Pulumi, or OpenTofu just to manage a couple of cloud dependencies like S3, SNS, or SQS. Just add an ops.json to the repo and we will spin them up in your cloud account whenever required.

Get started for free:

This functionality is available for free in all our plans. Get started now at localops.co and sign up for our free plan. Connect your cloud account & GitHub account to start launching services.

Talk to us if you need help starting up. We are available at [email protected] / [email protected] / https://go.localops.co/tour (if you prefer to see a personalized demo).

Build your product. Ship often. Delight your customers. We will ship your infrastructure!