
Announcing Continuous Deployments with GitHub Integration

Anand Muthukrishnan

Apr 25, 2025

Deploying to AWS can’t get easier than this! Today we are announcing a major capability that lets your dev teams focus on building their product instead of setting up cloud infrastructure or CI/CD.

From today, you can simply keep pushing new code to your GitHub repository. LocalOps will listen for every push, automatically pull the new code, and build and deploy it continuously to cloud environments (staging, production, customer-1-us, customer-2-eu) running in your cloud or your customer’s cloud.

Introducing Services:

To set up continuous deployments in your environment, we have introduced a new run primitive called a “Service”.

Earlier, an “App” in LocalOps required you to package all your code as a Kubernetes Helm chart and push it to a registry before deploying it to environments. A “Service” just needs you to configure a GitHub repo and branch name and push code. LocalOps then takes care of the rest, deploying the latest commit of the branch on the service running in a specific App environment.

To get started, create a “Service” in each of your environments. You can create any number of services for any purpose:

  1. Web service - Front end or back end

  2. Internal service

  3. Worker

  4. Job

  5. Cron job

Continuous deployments:

When you create a new service, you configure a specific Git repository and branch name in it. Whenever your team pushes new code to that branch, LocalOps automatically pulls the latest commit, builds it, and rolls out a new deployment to your environment. Build and deployment logs are shown right there in the UI.
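Under the hood, this is the classic push-webhook pattern. Here is a minimal sketch of that flow in Python - to be clear, this is not LocalOps’ actual implementation; the webhook secret, watched branch, and build commands are hypothetical stand-ins:

```python
# Sketch of a push-triggered deploy loop. Hypothetical stand-ins throughout:
# the webhook secret, watched branch, and build commands are placeholders.
import hashlib
import hmac
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBHOOK_SECRET = b"replace-me"       # secret registered with the GitHub webhook
WATCHED_BRANCH = "refs/heads/main"   # branch configured on the service

class PushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        # Verify GitHub's signature header before trusting the payload
        expected = "sha256=" + hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, self.headers.get("X-Hub-Signature-256", "")):
            self.send_response(401)
            self.end_headers()
            return
        event = json.loads(body)
        if event.get("ref") == WATCHED_BRANCH:
            commit = event["after"]  # SHA of the newest commit on the branch
            # Pull, build, and roll out the new version (placeholder commands)
            subprocess.run(["git", "pull", "origin", "main"], check=True)
            subprocess.run(["docker", "build", "-t", f"app:{commit[:7]}", "."], check=True)
        self.send_response(204)
        self.end_headers()

HTTPServer(("", 8080), PushHandler).serve_forever()
```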

Auto-renewing SSL certs:

For a web service, LocalOps will automatically configure a domain and an auto-renewing SSL cert from day one.
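If you want to verify the cert yourself, Python’s standard library can fetch and inspect it (the hostname below is a placeholder for your service’s domain):

```python
import socket
import ssl
import time

hostname = "my-service.example.com"  # placeholder; use your service's domain
ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.socket(), server_hostname=hostname) as conn:
    conn.connect((hostname, 443))
    cert = conn.getpeercert()

# notAfter looks like "Jun  1 12:00:00 2026 GMT"
expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = (expires_at - time.time()) / 86400
print(f"{hostname}: cert valid for {days_left:.0f} more days")
```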

Pick GPU instance types:

You can configure any kind of EC2 instance (in AWS) - GPU-, CPU-, or memory-intensive - to run your service, and LocalOps will provision it. This is HUGE if you need to run your AI models on GPUs for fast inference. LocalOps will soon auto-scale these as well.
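For a sense of what LocalOps automates here, provisioning a GPU node in your AWS account with boto3 looks roughly like this (the AMI ID is a placeholder; g5.xlarge carries a single NVIDIA A10G):

```python
# Hypothetical illustration of GPU provisioning in your AWS account;
# LocalOps does this for you. ImageId is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder GPU-enabled AMI
    InstanceType="g5.xlarge",         # 1x NVIDIA A10G, suited to inference
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```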

Auto healing:

When configuring a service, you can set the number of containers to run for it. Containers restart automatically when they crash, so the service always meets the count you have configured.
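Since apps on LocalOps deploy as Helm charts onto Kubernetes, this maps to a Deployment’s declared replica count: the control loop replaces crashed containers until the actual count matches the declared one. A sketch with the official Kubernetes Python client, using hypothetical names:

```python
# Sketch of the self-healing guarantee in Kubernetes terms. The service
# name, labels, and image below are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# replicas is a declared target: if a container crashes, the control
# loop starts a replacement to restore the count.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the container count configured on the service
        selector=client.V1LabelSelector(match_labels={"app": "my-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-service"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="app", image="app:latest")],
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```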

Built-in monitoring:

You can monitor each service and its runtime logs in the built-in Loki + Grafana setup we provision in the corresponding environment. Just go to the “Monitor” tab to view service-specific metrics and logs. You don't have to purchase Datadog or New Relic.
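If you’d rather script against the stack than click through Grafana, Loki also exposes a standard HTTP query API. A hedged example, assuming a hypothetical in-cluster Loki address and service label:

```python
# Query recent logs for one service over Loki's HTTP API. The URL and
# the "service" label are placeholders; in LocalOps you'd normally use
# the Monitor tab instead.
import requests

resp = requests.get(
    "http://loki.example.internal:3100/loki/api/v1/query_range",
    params={
        "query": '{service="my-service"}',  # LogQL label selector
        "limit": 100,                       # start/end default to the last hour
    },
)
for stream in resp.json()["data"]["result"]:
    for ts, line in stream["values"]:
        print(line)
```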

Encrypted secrets:

You can configure secrets for your service and pass them in as environment variables. Each secret is encrypted using Parameter Store (in AWS).
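For context, storing and reading an encrypted value in Parameter Store with boto3 looks like this (the parameter name and value are placeholders; LocalOps handles this wiring for you):

```python
# What encrypting a secret with Parameter Store involves under the hood.
# The parameter name and value are placeholders.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
ssm.put_parameter(
    Name="/my-service/DATABASE_URL",
    Value="postgres://user:pass@host:5432/db",
    Type="SecureString",  # encrypted at rest with a KMS key
    Overwrite=True,
)

# At deploy time, the value is decrypted and injected as an env var:
value = ssm.get_parameter(
    Name="/my-service/DATABASE_URL", WithDecryption=True
)["Parameter"]["Value"]
```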

What’s the big deal?

You can now spin up environments in any region and any cloud, connect your GitHub repo and branches to deploy any service continuously, and monitor everything with an open-source monitoring stack - all in your cloud or your customer’s cloud. No need to learn or configure AWS, GitHub, or anything in between!

🍦 You can point your main branch at SaaS staging environments, your prod branch at production environments, and your byoc branch at customer-specific environments. Neat workflow, isn’t it? :)

Let us know what you think. Write to [email protected], or chat with us at https://go.localops.co/meet-engineer if you see yourself using this in your org.

Sign up now at https://localops.co/ to get started for free.